IEEE Embedded Systems Letters: Latest Articles

Hiding in Plain Sight: Reframing Hardware Trojan Benchmarking as a Hide&Seek Modification
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-12-05 · DOI: 10.1109/LES.2024.3443155
Amin Sarihi; Ahmad Patooghy; Peter Jamieson; Abdel-Hameed A. Badawy
Abstract: This letter focuses on advancing security research in the hardware design space by formally defining the realistic problem of hardware Trojan (HT) detection. The goal is to model HT detection more closely to the real world, i.e., describing the problem as "The Seeker's Dilemma," where a detecting agent is unaware of whether circuits are infected by HTs or not. Using this theoretical problem formulation, we create a benchmark that consists of a mixture of HT-free and HT-infected restructured circuits while preserving their original functionalities. The restructured circuits are randomly infected by HTs, causing a situation where the defender is uncertain if a circuit is infected or not. We believe that our innovative benchmark and methodology of creating benchmarks will help the community judge the detection quality of different methods by comparing their success rates in circuit classification. We use our developed benchmark to evaluate three state-of-the-art HT detection tools to show baseline results for this approach. We use principal component analysis to assess the strength of our benchmark, where we observe that some restructured HT-infected circuits are mapped closely to HT-free circuits, leading to significant label misclassification by detectors.
Vol. 16, no. 4, pp. 361-364.
Citations: 0
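The PCA-based separability check described in the abstract can be sketched in a few lines: project per-circuit feature vectors onto their top principal components and measure how far apart the two class centroids land. The feature vectors below are synthetic placeholders, not the paper's benchmark data; the small mean shift between classes mimics the overlap the authors report.

```python
import numpy as np

def pca_project(features: np.ndarray, k: int = 2) -> np.ndarray:
    """Project feature vectors onto their top-k principal components."""
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Synthetic per-circuit features (e.g., gate counts, fan-in statistics).
rng = np.random.default_rng(0)
ht_free = rng.normal(0.0, 1.0, size=(50, 8))
ht_infected = rng.normal(0.3, 1.0, size=(50, 8))
proj = pca_project(np.vstack([ht_free, ht_infected]))

# Distance between class centroids in the projected space: a small gap means
# infected circuits land close to clean ones, so detectors will mislabel them.
gap = np.linalg.norm(proj[:50].mean(axis=0) - proj[50:].mean(axis=0))
```

A large gap would indicate an easy benchmark; the letter's observation is precisely that for some restructured circuits this gap is small.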
ViTSen: Bridging Vision Transformers and Edge Computing With Advanced In/Near-Sensor Processing
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-12-05 · DOI: 10.1109/LES.2024.3449240
Sepehr Tabrizchi; Brendan C. Reidy; Deniz Najafi; Shaahin Angizi; Ramtin Zand; Arman Roohi
Abstract: This letter introduces ViTSen, optimizing vision transformers (ViTs) for resource-constrained edge devices. It features an in-sensor image compression technique to reduce data conversion and transmission power costs effectively. Further, ViTSen incorporates a ReRAM array, allowing efficient near-sensor analog convolution. This integration, novel pixel reading, and peripheral circuitry decrease the reliance on analog buffers and converters, significantly lowering power consumption. To make established ViT algorithms compatible with ViTSen, several of them have undergone quantization and channel reduction. Circuit-to-application co-simulation results show that ViTSen maintains accuracy comparable to a full-precision baseline across various data precisions, achieving an efficiency of ~3.1 TOp/s/W.
Vol. 16, no. 4, pp. 341-344.
Citations: 0
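The quantization step mentioned in the abstract is, in its generic form, a mapping from float weights to low-bit integers plus a scale factor. A minimal sketch of symmetric uniform quantization follows; this is the textbook technique, not necessarily ViTSen's exact scheme.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization: map float weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q.astype(np.int32), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from integers and the scale."""
    return q.astype(np.float64) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=256)
q, scale = quantize(w, bits=8)
# Round-to-nearest bounds the per-weight error by half a quantization step.
err = np.max(np.abs(dequantize(q, scale) - w))
```

Lower `bits` shrinks storage and datapath width at the cost of a larger `scale`, i.e., a coarser error bound.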
SpiKernel: A Kernel Size Exploration Methodology for Improving Accuracy of the Embedded Spiking Neural Network Systems
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-12-02 · DOI: 10.1109/LES.2024.3510197
Rachmad Vidya Wicaksana Putra; Muhammad Shafique
Abstract: Spiking neural networks (SNNs) can offer ultralow power/energy consumption for machine learning-based application tasks due to their sparse spike-based operations. Currently, most SNN architectures need a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that achieve high accuracy with an acceptable memory footprint is highly needed. Toward this, we propose SpiKernel, a novel methodology that improves the accuracy of SNNs through kernel size exploration. Its key steps include: 1) investigating the impact of different kernel sizes on accuracy; 2) devising new sets of kernel sizes; 3) generating SNN architectures using neural architecture search based on the selected kernel sizes; and 4) analyzing the accuracy-memory tradeoffs for SNN model selection. The experimental results show that SpiKernel achieves higher accuracy than state-of-the-art works (i.e., 93.24% for CIFAR10, 70.84% for CIFAR100, and 62% for TinyImageNet) with fewer than 10M parameters and up to 4.8× speed-up in search time, making it suitable for embedded applications.
Vol. 17, no. 3, pp. 151-155.
Citations: 0
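The accuracy-memory tradeoff in step 4 can be illustrated with a toy filter over candidate kernel sizes: compute each size's parameter cost and keep only those that fit a memory budget. This is an illustrative sketch of the exploration idea, not the paper's neural architecture search.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameter count of a k x k convolution layer (weights + biases)."""
    return k * k * c_in * c_out + c_out

def explore_kernels(candidates, c_in, c_out, budget_params):
    """Keep kernel sizes whose parameter cost fits the memory budget,
    largest first (larger receptive fields tend to help accuracy)."""
    feasible = [k for k in candidates
                if conv_params(k, c_in, c_out) <= budget_params]
    return sorted(feasible, reverse=True)

# Hypothetical single-layer exploration under a 100k-parameter budget:
# 7x7 (200,768 params) and 5x5 (102,464) are rejected; 3x3 and 1x1 survive.
survivors = explore_kernels([1, 3, 5, 7], c_in=64, c_out=64,
                            budget_params=100_000)  # -> [3, 1]
```

A real exploration would additionally train each surviving architecture and pick by measured accuracy, as the methodology's steps 1 and 3 describe.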
High-Efficient Approximate Subtractors for Change Detection Applications
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-12-02 · DOI: 10.1109/LES.2024.3510416
Fatemeh Pooladi; Farshad Pesaran; Nabiollah Shiri
Abstract: Approximate full subtractors (FSs) are used in unsigned restoring dividers (Drs) for change detection in image processing. In this letter, two new single-bit approximate FSs with eight and six transistors are presented; the first circuit uses XNOR-F2 gates, while the second is designed based on multiplexer-F2 gates. The circuits are named XNF2 and MF2. The designs are implemented with the gate diffusion input (GDI) technique in 32-nm carbon nanotube field-effect transistor (CNTFET) technology. The XNF2 improves delay and area with a favorable error rate (ER), producing only two errors. The MF2 strikes a suitable tradeoff between power, area, and accuracy and is free of error distances (ED) > 1, which leads to a mean relative ED (MRED) < 0.375 and only three errors. Different figures of merit (FoMs) confirm the reliability of the cells for error-resilient applications, such as change detection.
Vol. 17, no. 3, pp. 168-171.
Citations: 0
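The error metrics quoted in the abstract (error rate, error distance) are computed by exhausting all eight input patterns of a single-bit full subtractor. The sketch below pairs an exact FS with a deliberately simplified borrow function to show how such a count is taken; the simplification is a hypothetical example, not the XNF2 or MF2 circuit.

```python
def exact_fs(a: int, b: int, bin_: int):
    """Exact single-bit full subtractor: returns (difference, borrow-out)."""
    diff = a ^ b ^ bin_
    bout = ((~a & b) | (bin_ & ~(a ^ b))) & 1
    return diff, bout

def approx_fs(a: int, b: int, bin_: int):
    """Hypothetical approximation: borrow-out collapsed to just b,
    trading two wrong borrow cases for a much smaller circuit."""
    return a ^ b ^ bin_, b

# Error rate = fraction of the 8 input patterns with any wrong output.
inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
errors = sum(exact_fs(*i) != approx_fs(*i) for i in inputs)  # 2 of 8 -> ER = 0.25
```

Since the difference bit stays exact, every error here has error distance 1, which is the property (ED > 1 never occurring) the letter highlights for MF2.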
TENDRA: Targeted Endurance Attack on STT-RAM LLC
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-19 · DOI: 10.1109/LES.2024.3502297
Prabuddha Sinha; Mangena Likhit Sai; Shirshendu Das; Venkata Kalyan Tavva
Abstract: Spin transfer torque RAM (STT-RAM)-based last-level cache (LLC) offers significant benefits like high density and low refresh energy, but faces challenges like high write latency and limited endurance. Malicious attacks in a multicore setup need access to only a single core to perform repeated attacks on specific memory locations, which can lead to accelerated lifetime degradation of the STT-RAM LLC cells. To highlight this vulnerability of STT-RAM LLC, we propose two variations of TENDRA (targeted endurance attack), namely, the recurring location attack (RLA) and the recurring toggle attack (RTA). Our work highlights the efficiency of these attacks against modern counter-based wear-leveling techniques and also the effect of wear leveling on these attacks.
Vol. 17, no. 3, pp. 172-175.
Citations: 0
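The core mechanism behind a targeted endurance attack is that cache set indexing is deterministic: addresses sharing the same index bits keep landing in, and wearing out, the same set. A toy model of this concentration effect (set geometry and indexing simplified; not the paper's attack code):

```python
def set_write_counts(addresses, num_sets: int = 1024, line_bits: int = 6):
    """Count writes landing in each cache set, using the conventional
    mapping: set index = address bits just above the line offset."""
    counts = [0] * num_sets
    for addr in addresses:
        counts[(addr >> line_bits) % num_sets] += 1
    return counts

# An attacker that keeps writing addresses whose index bits equal 5
# (varying only the tag bits) concentrates all wear on set 5's cells.
attack = [(5 << 6) + (k << 16) for k in range(1000)]
counts = set_write_counts(attack)
```

Wear leveling tries to break exactly this concentration by remapping hot sets, which is why the letter evaluates the attacks against counter-based wear-leveling schemes.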
FPGA Implementation of an Image Classifier Using Pipelined FFT Architecture
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-18 · DOI: 10.1109/LES.2024.3500020
Shafiqul Hai; Tella Rajashekhar Reddy
Abstract: Deep neural networks (DNNs) form an important class of machine learning algorithms generally used to classify digital data in the form of image and speech recognition. The computational complexity of a DNN-based image classifier is higher than that of traditional fully connected (FC) feed-forward NNs. Therefore, dedicated cloud servers and graphics processing units (GPUs) are utilized to achieve high-speed, large-capacity computation in machine vision systems. However, there is a growing demand for real-time processing of complex machine-learning tasks on embedded systems. As FC layers consume the highest fraction of computational power and memory footprint, innovating power-efficient, low-footprint NN architectures for embedded systems is crucial. In this letter, a pipelined and parallel fast Fourier transform (FFT)-based FC-DNN architecture is implemented on a Stratix-10 FPGA using VHDL. The footprint of the DNN is further reduced using a folded FFT network. The proposed algorithm is tested on two benchmark training sets, the MNIST database of handwritten digits and the CIFAR-10 database. In both cases, we achieve >90% accuracy, while the power consumption of the 2-parallel folded FFT-based network is around 45% less than that of traditional series FFT-based architectures.
Vol. 17, no. 3, pp. 188-191.
Citations: 0
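The usual way an FFT shrinks an FC layer is to constrain the weight matrix to be circulant, turning the O(n²) matrix-vector product into an O(n log n) circular convolution. A minimal sketch of that generic technique (not the letter's pipelined VHDL design) follows, cross-checked against the explicit product:

```python
import numpy as np

def circulant_fc(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """FC layer with a circulant weight matrix:
    y[i] = sum_j w[(i - j) % n] * x[j], evaluated via FFT in O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# Cross-check against the explicit O(n^2) circulant matrix-vector product.
rng = np.random.default_rng(2)
n = 8
x, w = rng.normal(size=n), rng.normal(size=n)
explicit = np.array([sum(w[(i - j) % n] * x[j] for j in range(n))
                     for i in range(n)])
match = np.allclose(circulant_fc(x, w), explicit)
```

Only the n-element vector `w` is stored instead of an n x n matrix, which is the footprint saving that makes the approach attractive for embedded FC layers.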
TLP Balancer: Predictive Thread Allocation for Multitenant Inference in Embedded GPUs
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-14 · DOI: 10.1109/LES.2024.3497587
Minseong Gil; Jaebeom Jeon; Junsu Kim; Sangun Choi; Gunjae Koo; Myung Kuk Yoon; Yunho Oh
Abstract: This letter introduces a novel software technique to optimize thread allocation for merged and fused kernels in multitenant inference systems on embedded graphics processing units (GPUs). Embedded systems equipped with GPUs face challenges in managing diverse deep learning workloads while adhering to quality-of-service (QoS) standards, primarily due to limited hardware resources and the varied nature of deep learning models. Prior work has relied on static thread allocation strategies, often leading to suboptimal hardware utilization. To address these challenges, we propose a new software technique called the thread-level parallelism (TLP) Balancer. TLP Balancer automatically identifies the best-performing number of threads based on performance modeling. This approach significantly enhances hardware utilization and ensures QoS compliance, outperforming traditional fixed-thread allocation methods. Our evaluation shows that TLP Balancer improves throughput by 40% compared to state-of-the-art automated kernel merge and fusion techniques.
Vol. 17, no. 3, pp. 180-183.
Citations: 0
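"Identifying the best-performing number of threads from a performance model" reduces to maximizing a predicted-throughput function over candidate thread counts. The model below is a toy stand-in (linear TLP benefit, quadratic contention penalty), chosen only to make the selection loop concrete; the paper's actual model differs.

```python
def predict_throughput(threads: int, peak: float = 100.0,
                       contention: float = 0.02) -> float:
    """Toy model: throughput grows with thread-level parallelism but is
    dragged down quadratically by shared-resource contention."""
    return peak * threads / (1.0 + contention * threads * threads)

def best_thread_count(max_threads: int) -> int:
    """Pick the thread count the model predicts performs best."""
    return max(range(1, max_threads + 1), key=predict_throughput)

# With contention = 0.02 the model peaks near sqrt(1/0.02) ~ 7 threads,
# well short of the hardware maximum.
best = best_thread_count(64)
```

A static allocator would pin the hardware maximum (64 here); the model-driven choice illustrates why predictive allocation can beat fixed-thread strategies.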
Lustre, Fast First, and Fresh
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-14 · DOI: 10.1109/LES.2024.3498932
Timothy Bourke; Marc Pouzet
Abstract: The rate-synchronous model formalizes an industrial approach for composing Lustre nodes that execute at different rates. Such programs are compiled to cyclic sequential code in two steps. First, an integer linear program is solved to assign each component to a phase relative to its period. Second, the corresponding step functions are ordered for execution within a cycle of the generated code. By default, programs are deterministic: for any valid schedule, the generated code calculates the values decreed by the source dataflow semantics at the specified rates. In practice, though, specifying precise values in the source program is sometimes unnecessary, impracticable, and overly constraining. In this case, the integer linear programming constraints can be relaxed, though not necessarily completely, and their solution decides which dataflow semantics applies. Care is still required to ensure that code generation remains deterministic.
Vol. 17, no. 2, pp. 119-122.
Citations: 0
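The first compilation step, assigning each component a phase relative to its period, can be illustrated with a tiny brute-force load balancer: spread the components' firings so no base cycle is overloaded. Real compilers solve this as an integer linear program; the exhaustive search below is only a didactic stand-in for small instances.

```python
from itertools import product
from math import lcm

def balance_phases(periods):
    """Assign each component a phase in [0, period) so that the maximum
    number of components firing in any base cycle is minimized.
    (Industrial schedulers solve this as an integer linear program.)"""
    hyper = lcm(*periods)
    best, best_load = None, None
    for phases in product(*(range(p) for p in periods)):
        load = max(sum(1 for ph, p in zip(phases, periods) if t % p == ph)
                   for t in range(hyper))
        if best_load is None or load < best_load:
            best, best_load = phases, load
    return best, best_load

# Two components at period 2 and one at period 4: five firings over a
# hyperperiod of four cycles, so some cycle must host at least two.
phases, worst = balance_phases([2, 2, 4])
```

The second compilation step, ordering step functions within each cycle, would then operate on the components that this assignment places in the same phase.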
Towards Energy-Accuracy Scalable Multimodal Cognitive Systems
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-13 · DOI: 10.1109/LES.2024.3497935
Soumendu Kumar Ghosh; Arghadip Das; Arnab Raha; Vijay Raghunathan
Abstract: Transformer-powered multimodal artificial intelligence (MMAI) holds great promise for developing cognitive systems that can analyze and interpret data from various sensory modalities simultaneously. However, deploying MMAI on resource-constrained "edge" platforms poses significant challenges due to the intensive compute and memory requirements of transformer models, communication bandwidth limitations, real-time processing needs, and the intricacies of multimodal data fusion. To overcome these challenges, we introduce collaborative multimodal inference, leveraging the strengths of MMAI, edge computing, and cloud resources. Our solution introduces modality-aware accuracy-efficiency (AE) knobs, extending beyond multimodal sensors to individual subsystems within the edge system. We explore intersubsystem and intermodal interactions, investigating system-level AE tradeoffs in the presence of synergistic optimizations. Building on these insights, we present SysteMMX, the first AE scalable cognitive system for efficient multimodal inference at the edge. In this letter, we present an in-depth case study centered around a multimodal system employing RGB and depth sensors for image segmentation. Our system, SysteMMX, demonstrates significant energy savings (1.8× on the edge device and 1.7× on the edge server) with an imperceptible application-level accuracy loss of less than 0.01%. Furthermore, SysteMMX outperforms single-modality optimizations, achieving 1.2× and 1.8× more energy efficiency on the edge compared to RGB-only and Depth-only approaches, respectively, for similar levels of accuracy loss.
Vol. 17, no. 3, pp. 156-159.
Citations: 0
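At its simplest, tuning modality-aware accuracy-efficiency knobs is a constrained search: over the cross-product of per-modality settings, pick the cheapest combination whose total accuracy loss fits the budget. The sketch below uses invented (energy, accuracy-loss) placeholder values; the paper's knobs and cost models are more elaborate.

```python
from itertools import product

def pick_knobs(rgb_opts, depth_opts, max_acc_loss):
    """Choose the per-modality knob pair with the lowest total energy whose
    combined accuracy loss stays within budget. Each option is a
    (energy, accuracy_loss) pair."""
    feasible = [(r, d) for r, d in product(rgb_opts, depth_opts)
                if r[1] + d[1] <= max_acc_loss]
    return min(feasible, key=lambda rd: rd[0][0] + rd[1][0])

# Placeholder knob settings: full fidelity, mild approximation, aggressive.
rgb = [(10.0, 0.0), (6.0, 0.004), (4.0, 0.02)]
depth = [(8.0, 0.0), (5.0, 0.005), (3.0, 0.03)]
best = pick_knobs(rgb, depth, max_acc_loss=0.01)
```

Note that the winning pair approximates both modalities a little rather than one modality a lot, which mirrors the letter's point that joint, modality-aware tuning beats single-modality optimization.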
Investigation of Security Vulnerabilities in NVM-Based Persistent TinyML Hardware
IF 1.7 · CAS Zone 4 · Computer Science
IEEE Embedded Systems Letters · Pub Date: 2024-11-12 · DOI: 10.1109/LES.2024.3496508
Bhanprakash Goswami; Chithambara J. Moorthii; Harshit Bansal; Ayan Sajwan; Manan Suri
Abstract: This study investigates the vulnerability of future-generation nonvolatile memory (NVM)-backed persistent TinyML hardware neural networks to side-channel attacks (SCAs) using electromagnetic (EM) analysis methods. We trained three different TinyML models, MobileNet, ResNet, and EfficientNet, on three different standard datasets: F-MNIST, CIFAR-10, and MNIST. The trained networks were then mapped onto a custom FPGA-NVM setup for EM-SCA evaluation. We demonstrate that information about the stored model parameters/weights can be extracted by applying statistical methods to the collected EM emanation data. Further, we demonstrate that the obtained model parametric information can be used to clone some of the lightweight edge TinyML models with only 0.5%-10% of the total training dataset.
Vol. 17, no. 3, pp. 204-209.
Citations: 0
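A standard statistical method for this kind of extraction is correlation analysis: rank candidate values of a secret byte by how well a leakage model of each guess correlates with the measured traces. The demo below runs on synthetic traces with an assumed XOR/Hamming-weight leakage model; the letter's actual leakage behavior and statistics may differ.

```python
import numpy as np

def hw(v: int) -> int:
    """Hamming weight of a byte."""
    return bin(v & 0xFF).count("1")

def recover_byte(traces: np.ndarray, inputs: np.ndarray) -> int:
    """Rank byte guesses by |correlation| between a Hamming-weight model
    of (input XOR guess) and the measured samples; return the best guess."""
    best_g, best_c = 0, -1.0
    for g in range(256):
        model = np.array([hw(int(x) ^ g) for x in inputs], dtype=float)
        c = abs(np.corrcoef(model, traces)[0, 1])
        if c > best_c:
            best_g, best_c = g, c
    return best_g

# Synthetic demo: each trace sample leaks HW(input XOR secret) plus noise.
rng = np.random.default_rng(3)
secret = 0xA7
inputs = rng.integers(0, 256, size=500)
traces = np.array([hw(int(x) ^ secret) for x in inputs]) \
    + rng.normal(0.0, 0.5, 500)
```

With enough traces, the correct guess's correlation peak stands well clear of the runners-up, which is what makes passive EM collection sufficient for parameter recovery.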