ACM Trans. Embed. Comput. Syst. — Latest Articles

Verifying Stochastic Hybrid Systems with Temporal Logic Specifications via Model Reduction
ACM Trans. Embed. Comput. Syst. Pub Date : 2020-09-16 DOI: 10.1145/3483380
Yu Wang, Nima Roohi, Matthew West, Mahesh Viswanathan, G. Dullerud
{"title":"Verifying Stochastic Hybrid Systems with Temporal Logic Specifications via Model Reduction","authors":"Yu Wang, Nima Roohi, Matthew West, Mahesh Viswanathan, G. Dullerud","doi":"10.1145/3483380","DOIUrl":"https://doi.org/10.1145/3483380","url":null,"abstract":"We present a scalable methodology to verify stochastic hybrid systems for inequality linear temporal logic (iLTL) or inequality metric interval temporal logic (iMITL). Using the Mori–Zwanzig reduction method, we construct a finite-state Markov chain reduction of a given stochastic hybrid system and prove that this reduced Markov chain is approximately equivalent to the original system in a distributional sense. Approximate equivalence of the stochastic hybrid system and its Markov chain reduction means that analyzing the Markov chain with respect to a suitably strengthened property allows us to conclude whether the original stochastic hybrid system meets its temporal logic specifications. Based on this, we propose the first statistical model checking algorithms to verify stochastic hybrid systems against correctness properties, expressed in iLTL or iMITL. The scalability of the proposed algorithms is demonstrated by a case study.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122242748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DSTL: A Demand-Based Shingled Translation Layer for Enabling Adaptive Address Mapping on SMR Drives
ACM Trans. Embed. Comput. Syst. Pub Date : 2020-07-16 DOI: 10.1145/3391892
Yi-Jing Chuang, Shuo-Han Chen, Yuan-Hao Chang, Yu-Pei Liang, H. Wei, W. Shih
{"title":"DSTL: A Demand-Based Shingled Translation Layer for Enabling Adaptive Address Mapping on SMR Drives","authors":"Yi-Jing Chuang, Shuo-Han Chen, Yuan-Hao Chang, Yu-Pei Liang, H. Wei, W. Shih","doi":"10.1145/3391892","DOIUrl":"https://doi.org/10.1145/3391892","url":null,"abstract":"Shingled magnetic recording (SMR) is regarded as a promising technology for resolving the areal density limitation of conventional magnetic recording hard disk drives. Among different types of SMR drives, drive-managed SMR (DM-SMR) requires no changes on the host software and is widely used in today’s consumer market. DM-SMR employs a shingled translation layer (STL) to hide its inherent sequential-write constraint from the host software and emulate the SMR drive as a block device via maintaining logical to physical block address mapping entries. However, because most existing STL designs do not simultaneously consider the access pattern and the data update frequency of incoming workloads, those mapping entries maintained within the STL cannot be effectively managed, thus inducing unnecessary performance overhead. To resolve the inefficiency of existing STL designs, this article proposes a demand-based STL (DSTL) to simultaneously consider the access pattern and update frequency of incoming data streams to enhance the access performance of DM-SMR. The proposed design was evaluated by a series of experiments, and the results show that the proposed DSTL can outperform other SMR management approach by up to 86.69% in terms of read/write performance.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114674873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
LAMBDA: Lightweight Assessment of Malware for emBeddeD Architectures
ACM Trans. Embed. Comput. Syst. Pub Date : 2020-07-16 DOI: 10.1145/3390855
S. Kadiyala, Manaar Alam, Yash Shrivastava, Sikhar Patranabis, Muhamed Fauzi Bin Abbas, A. Biswas, Debdeep Mukhopadhyay, T. Srikanthan
{"title":"LAMBDA: Lightweight Assessment of Malware for emBeddeD Architectures","authors":"S. Kadiyala, Manaar Alam, Yash Shrivastava, Sikhar Patranabis, Muhamed Fauzi Bin Abbas, A. Biswas, Debdeep Mukhopadhyay, T. Srikanthan","doi":"10.1145/3390855","DOIUrl":"https://doi.org/10.1145/3390855","url":null,"abstract":"SAI PRAVEEN KADIYALA, Nanyang Technological University, Singapore MANAAR ALAM, YASH SHRIVASTAVA, and SIKHAR PATRANABIS, Indian Institute of Technology Kharagpur, India MUHAMED FAUZI BIN ABBAS, Nanyang Technological University, Singapore ARNAB KUMAR BISWAS, National University of Singapore, Singapore DEBDEEP MUKHOPADHYAY, Indian Institute of Technology Kharagpur, India THAMBIPILLAI SRIKANTHAN, Nanyang Technological University, Singapore","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115205957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
FFConv: An FPGA-based Accelerator for Fast Convolution Layers in Convolutional Neural Networks
ACM Trans. Embed. Comput. Syst. Pub Date : 2020-03-17 DOI: 10.1145/3380548
Afzal Ahmad, Muhammad Adeel Pasha
{"title":"FFConv: An FPGA-based Accelerator for Fast Convolution Layers in Convolutional Neural Networks","authors":"Afzal Ahmad, Muhammad Adeel Pasha","doi":"10.1145/3380548","DOIUrl":"https://doi.org/10.1145/3380548","url":null,"abstract":"ing with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. © 2020 Association for Computing Machinery. 1539-9087/2020/03-ART15 $15.00 https://doi.org/10.1145/3380548 ACM Transactions on Embedded Computing Systems, Vol. 19, No. 2, Article 15. Publication date: March 2020. 15:2 A. Ahmad and M. A. Pasha state-of-the-art algorithms for computer vision tasks such as image classification, object detection, and semantic segmentation [14, 16, 21]. While the accuracy achieved by CNNs on these tasks is unparalleled, their extreme computational budgets limit wider implementation. While Graphical Processing Units (GPUs) are being used to deploy CNN architectures exploiting the algorithmic parallelism over the many-cores that they provide [6], their power consumption is high and their architectures are more generic. Owing to their reconfigurability, Field Programmable Gate Array (FPGA)-based implementations [22, 29, 41] are being explored to design parallel and pipelined network architectures that give improved performance and power efficiency compared to general-purpose processors (CPUs) and GPUs. While more custom solutions in the form of Application-Specific Integrated Circuits (ASICs) can be implemented that further improve the performance and power efficiency compared to their FPGA-based counterparts [3], ASIC-based designs are rigid, hence may only be justified at the stage of final implementation when thorough testing and prototyping has been done on a more reconfigurable FPGA-based platform. Significant research effort is also being put into optimizing different layers of CNNs to gain improvements in the performance metrics for a wider range of CNN architectures and hardware platforms. Fast Fourier Transforms (FFT)-based convolutions have shown significant gains in reducing the computational complexity of convolutional layers that use large kernel sizes (≥7 × 7) implemented on GPU platforms [31]. Although this reduction in computational complexity offered by FFT-based convolution is significant for large kernel sizes, modern neural network architectures, such as VGGNet [28], ResNet [16], MobileNets [17], and GoogLeNet [30], tend towards smaller kernel sizes and deeper topologies. FFT-based convolutions have actually been shown to increase the overall computation time of layers that use smaller kernel sizes by as much as 16× [31]. Winograd minimal filtering [34] based fast convolution algorithms (we will refer to them as “fast-conv” from here onwards) have also been proposed and have shown significant improvements for small kernel sizes, applicable to most modern networks [20]. Fast-conv algorithms work by reducing the computational complexity of expensive operations while adding transform stages that increase the number of cheaper operations involved in the convolution. Furthermore, the arithmetic cost of t","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. 
Syst.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125982453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
REAL: REquest Arbitration in Last Level Caches
ACM Trans. Embed. Comput. Syst. Pub Date : 2020-01-22 DOI: 10.1145/3362100
Sakshi Tiwari, Shreshth Tuli, Isaar Ahmad, Ayushi Agarwal, P. Panda, S. Subramoney
{"title":"REAL: REquest Arbitration in Last Level Caches","authors":"Sakshi Tiwari, Shreshth Tuli, Isaar Ahmad, Ayushi Agarwal, P. Panda, S. Subramoney","doi":"10.1145/3362100","DOIUrl":"https://doi.org/10.1145/3362100","url":null,"abstract":"Shared last level caches (LLC) of multicore systems-on-chip are subject to a significant amount of contention over a limited bandwidth, resulting in major performance bottlenecks that make the issue a first-order concern in modern multiprocessor systems-on-chip. Even though shared cache space partitioning has been extensively studied in the past, the problem of cache bandwidth partitioning has not received sufficient attention. We demonstrate the occurrence of such contention and the resulting impact on the overall system performance. To address the issue, we perform detailed simulations to study the impact of different parameters and propose a novel cache bandwidth partitioning technique, called REAL, that arbitrates among cache access requests originating from different processor cores. It monitors the LLC access patterns to dynamically assign a priority value to each core. Experimental results on different mixes of benchmarks show up to 2.13× overall system speedup over baseline policies, with minimal impact on energy.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131016474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Editorial: To Use or Not To? Embedded Systems for Voting
ACM Trans. Embed. Comput. Syst. Pub Date : 2018-06-02 DOI: 10.1145/3206342
S. Shukla
{"title":"Editorial: To Use or Not To? Embedded Systems for Voting","authors":"S. Shukla","doi":"10.1145/3206342","DOIUrl":"https://doi.org/10.1145/3206342","url":null,"abstract":"","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133344056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Heuristics on Reachability Trees for Bicriteria Scheduling of Stream Graphs on Heterogeneous Multiprocessor Architectures
ACM Trans. Embed. Comput. Syst. Pub Date : 2015-03-25 DOI: 10.1145/2638553
Avinash Malik, David Gregg
{"title":"Heuristics on Reachability Trees for Bicriteria Scheduling of Stream Graphs on Heterogeneous Multiprocessor Architectures","authors":"Avinash Malik, David Gregg","doi":"10.1145/2638553","DOIUrl":"https://doi.org/10.1145/2638553","url":null,"abstract":"In this article, we partition and schedule Synchronous Dataflow (SDF) graphs onto heterogeneous execution architectures in such a way as to minimize energy consumption and maximize throughput. Partitioning and scheduling SDF graphs onto homogeneous architectures is a well-known NP-hard problem. The heterogeneity of the execution architecture makes our problem exponentially challenging to solve. We model the problem as a weighted sum and solve it using novel state space exploration inspired from the theory of parallel automata. The resultant heuristic algorithm results in good scheduling when implemented in an existing stream framework.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131720677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Multilevel Phase Analysis
ACM Trans. Embed. Comput. Syst. Pub Date : 2015-03-25 DOI: 10.1145/2629594
Weihua Zhang, Jiaxin Li, Yi Li, Haibo Chen
{"title":"Multilevel Phase Analysis","authors":"Weihua Zhang, Jiaxin Li, Yi Li, Haibo Chen","doi":"10.1145/2629594","DOIUrl":"https://doi.org/10.1145/2629594","url":null,"abstract":"Phase analysis, which classifies the set of execution intervals with similar execution behavior and resource requirements, has been widely used in a variety of systems, including dynamic cache reconfiguration, prefetching, race detection, and sampling simulation. Although phase granularity has been a major factor in the accuracy of phase analysis, it has not been well investigated, and most systems usually adopt a fine-grained scheme. However, such a scheme can only take account of recent local phase information and could be frequently interfered by temporary noise due to instant phase changes, which might notably limit the accuracy.\u0000 In this article, we make the first investigation on the potential of multilevel phase analysis (MLPA), where different granularity phase analyses are combined together to improve the overall accuracy. The key observation is that the coarse-grained intervals belonging to the same phase usually consist of stably distributed fine-grained phases. Moreover, the phase of a coarse-grained interval can be accurately identified based on the fine-grained intervals at the beginning of its execution. Based on the observation, we design and implement an MLPA scheme. In such a scheme, a coarse-grained phase is first identified based on the fine-grained intervals at the beginning of its execution. The following fine-grained phases in it are then predicted based on the sequence of fine-grained phases in the coarse-grained phase. Experimental results show that such a scheme can notably improve the prediction accuracy. Using a Markov fine-grained phase predictor as the baseline, MLPA can improve prediction accuracy by 20%, 39%, and 29% for next phase, phase change, and phase length prediction for SPEC2000, respectively, yet incur only about 2% time overhead and 40% space overhead (about 360 bytes in total). To demonstrate the effectiveness of MLPA, we apply it to a dynamic cache reconfiguration system that dynamically adjusts the cache size to reduce the power consumption and access time of the data cache. Experimental results show that MLPA can further reduce the average cache size by 15% compared to the fine-grained scheme.\u0000 Moreover, for MLPA, we also observe that coarse-grained phases can better capture the overall program characteristics with fewer of phases and the last representative phase could be classified in a very early program position, leading to fewer execution internals being functionally simulated. Based on this observation, we also design a multilevel sampling simulation technique that combines both fine- and coarse-grained phase analysis for sampling simulation. Such a scheme uses fine-grained simulation points to represent only the selected coarse-grained simulation points instead of the entire program execution; thus, it could further reduce both the functional and detailed simulation time. Experimental results show that MLPA for sampling simulation can achieve a speedup in simulation time of about 8.3X with simi","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. 
Syst.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130869259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Plugging Versus Logging: Adaptive Buffer Management for Hybrid-Mapping SSDs
ACM Trans. Embed. Comput. Syst. Pub Date : 2015-03-25 DOI: 10.1145/2629455
Li-Pin Chang, Yo-Chuan Su, I-Chen Wu
{"title":"Plugging Versus Logging: Adaptive Buffer Management for Hybrid-Mapping SSDs","authors":"Li-Pin Chang, Yo-Chuan Su, I-Chen Wu","doi":"10.1145/2629455","DOIUrl":"https://doi.org/10.1145/2629455","url":null,"abstract":"A promising technique to improve the write performance of solid-state disks (SSDs) is to use a disk write buffer. The goals of a write buffer is not only to reduce the write traffic to the flash chips but also to convert host write patterns into long and sequential write bursts. This study proposes a new buffer design consisting of a replacement policy and a write-back policy. The buffer monitors how the host workload stresses the flash translation layer upon garbage collection. This is used to dynamically adjust its replacement and write-back strategies for a good balance between write sequentiality and write randomness. When the garbage collection overhead is low, the write buffer favors high write sequentiality over low write randomness. When the flash translation layer observes a high overhead of garbage collection, the write buffer favors low write randomness over high write sequentiality. The proposed buffer design outperformed existing approaches by up to 20% under various workloads and flash translation algorithms, as will be shown in experiment results.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115549538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Libra: Software-Controlled Cell Bit-Density to Balance Wear in NAND Flash
ACM Trans. Embed. Comput. Syst. Pub Date : 2015-03-25 DOI: 10.1145/2638552
Xavier Jimenez, D. Novo, P. Ienne
{"title":"Libra: Software-Controlled Cell Bit-Density to Balance Wear in NAND Flash","authors":"Xavier Jimenez, D. Novo, P. Ienne","doi":"10.1145/2638552","DOIUrl":"https://doi.org/10.1145/2638552","url":null,"abstract":"Hybrid flash storages combine a small Single-Level Cell (SLC) partition with a large Multilevel Cell (MLC) partition. Compared to MLC-only solutions, the SLC partition exploits fast and short local write updates, while the MLC part brings large capacity. On the whole, hybrid storage achieves a tangible performance improvement for a moderate extra cost. Yet, device lifetime is an important aspect often overlooked: in a hybrid system, a large ratio of writes may be directed to the small SLC partition, thus generating a local stress that could exhaust the SLC lifetime significantly sooner than the MLC partition's. To address this issue, we propose Libra, which builds on flash storage made solely of MLC flash and uses the memory devices in SLC mode when appropriate; that is, we exploit the fact that writing a single bit per cell in an MLC provides characteristics close to those of an ordinary SLC. In our scheme, the cell bit-density of a block can be decided dynamically by the flash controller, and the physical location of the SLC partition can now be moved around the whole device, balancing wear across it. This article provides a thorough analysis and characterization of the SLC mode for MLCs and gives evidence that the inherent flexibility provided by Libra simplifies considerably the stress balance on the device. Overall, our technique improves lifetime by up to one order of magnitude at no cost when compared to any hybrid storage that relies on a static SLC-MLC partitioning.","PeriodicalId":183677,"journal":{"name":"ACM Trans. Embed. Comput. Syst.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116804026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9