2020 IEEE 33rd International System-on-Chip Conference (SOCC): Latest Publications

An Energy-Efficient Low Power LSTM Processor for Human Activity Monitoring
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524796
A. Mazumder, Hasib-Al Rashid, T. Mohsenin
{"title":"An Energy-Efficient Low Power LSTM Processor for Human Activity Monitoring","authors":"A. Mazumder, Hasib-Al Rashid, T. Mohsenin","doi":"10.1109/socc49529.2020.9524796","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524796","url":null,"abstract":"A low complexity Long Short-Term Memory (LSTM) based neural network architecture is proposed in this paper for the classification task of recognizing different human activities in relation to various sensor modalities. The proposed model consists of one LSTM layer of 8 units, two dense layers having 80 and 32 neurons respectively and one output layer with 13 neurons for multi-class classification. We achieved 87.17 % classification accuracy with our proposed model to classify 12 activities from each other. The proposed work involves extensive hyperparameter optimization in order to develop a hardware implementable model architecture while also maintaining high classification accuracy. In this case, quantization allowed the model to have a small size of 365 kB which resulted in 2x improvement over the 16-bit precision. The hardware architecture is designed in a parameterized way with respect to the number of input channels, filters, and data width to give more flexibility in terms of reconfigurability. The proposed LSTM based model is fully synthesized and placed-and-routed on Xilinx Artix-7 FPGA. Our reconfigurable hardware architecture consumes 82 mW power at an operating frequency of 160 MHz. Our LSTM based FPGA hardware achieves 7.7 GOP/s/W energy efficiency which outperforms previous hardware architecture implementations on Human Activity Recognition (HAR) by atleast 5.2×. The proposed low power LSTM processor also has an improvement of atleast 4.1 x for energy efficiency over previous LSTM works based on language modeling and artifact detection.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124492085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
End-to-end Scalable and Low Power Multi-modal CNN for Respiratory-related Symptoms Detection
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524755
Haoran Ren, A. Mazumder, Hasib-Al Rashid, Vandana Chandrareddy, Aidin Shiri, N. Manjunath, T. Mohsenin
{"title":"End-to-end Scalable and Low Power Multi-modal CNN for Respiratory-related Symptoms Detection","authors":"Haoran Ren, A. Mazumder, Hasib-Al Rashid, Vandana Chandrareddy, Aidin Shiri, N. Manjunath, T. Mohsenin","doi":"10.1109/socc49529.2020.9524755","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524755","url":null,"abstract":"With the onset of the highly contagious COVID-19 pandemic, early-stage and clinic-independent machine assistance is essential for initial disease diagnosis based on its symptoms such as fever, dry cough, fatigue, and dyspnea. This paper proposes a scalable and low power architecture based on end-to-end Convolutional Neural Networks (CNN) for respiratory-related symptoms (cough and dyspnea) detection. The CNN-based model will be part of the final product running on general computing processors that can assess patients similar to what doctors do at triage and telemedicine using passively recorded audio and other information. The proposed model consists of 1D-convolutions to extract audio features and combinations of 2D-convolutions and fully-connected neurons for classification. The architecture achieves a detection accuracy of 87.5% for cough and 87.3% for dyspnea respectively. The proposed work involves extensive optimization of parameters in order to develop a model architecture that can be implemented on highly constrained power budget devices while maintaining high classification accuracy. This optimization allows us to achieve the model size of 960 KB for cough detection which is 193x smaller than the related works employing the end-to-end CNN architecture. The hardware architecture is designed to provide more versatility in terms of the number of input channels, filters, data width and processing engine (P.E.) in a parameterized manner with the target of proposing a reconfigurable hardware. The proposed architecture is fully synthesized and placed-and-routed on Xilinx Artix-7 FPGA. At 47.6 MHz operating frequency, our cough detection hardware architecture consumes 211 mW of power. On the other hand, dyspnea detection hardware architecture consumes 207 mW power at an operating frequency of 50 MHz. In addition, the proposed hardware architecture meets the latency deadline of 1s needed for the efficient operation of hardware while still being energy-effective compared to related work.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122429072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Mist-Scan: A Secure Scan Chain Architecture to Resist Scan-Based Attacks in Cryptographic Chips
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524759
Mohammad Taherifard, H. Beitollahi, Fateme Jamali, Amin Norollah, A. Patooghy
{"title":"Mist-Scan: A Secure Scan Chain Architecture to Resist Scan-Based Attacks in Cryptographic Chips","authors":"Mohammad Taherifard, H. Beitollahi, Fateme Jamali, Amin Norollah, A. Patooghy","doi":"10.1109/socc49529.2020.9524759","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524759","url":null,"abstract":"Since the advent of crypto-chips, side-channel attacks have been always serious threats to hardware cryptographic systems and chips. Side-channel attacks use implementation level weaknesses instead of computational weaknesses to break into cryptographic algorithms and discover the encryption key. Scan chain architecture, which is widely used to facilitate the chips' testability, introduces a new side-channel vulnerability to the chips (so-called scan-based attack). As, standard scan chain architectures e.g., JTAG, Nexus play a key role in the in-field test as well as the required hardware updates of chips, the solution of burning the scan chain out is not acceptable in most of the applications. In this paper, a method called Mist-scan is proposed and evaluated that detects and prevents applied scan-based attacks in the runtime. Using a user authorization mechanism, the chip will work normally for authorized users and output the corresponding responses. However, in the case of non-authorized users, fake outputs will be generated and the attackers access to sensitive information of the crypto-chip will be prevented in a cycle. Simulation results on the DES encryption algorithm indicate that the proposed method retains the chips testability and prevents both differential and signature-based attacks perfectly. This comes at the negligible area cost of less than 0.8%.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124128004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Achieving Flexible, Low-Latency and 100Gbps Line-rate Load Balancing over Ethernet on FPGA
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524774
Jinyu Xie, Wenbo Yin, Lingli Wang
{"title":"Achieving Flexible, Low-Latency and 100Gbps Line-rate Load Balancing over Ethernet on FPGA","authors":"Jinyu Xie, Wenbo Yin, Lingli Wang","doi":"10.1109/socc49529.2020.9524774","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524774","url":null,"abstract":"Load balancing is one of the most important network services in cloud data centers. However, traditional load balancers are gradually overstretched with the explosive growth of big data, whose latency and throughput are far from satisfying the performance requirements. Based on the high parallelism and flexibility of Field Programmable Gate Array (FPGA), this paper presents a load balancing scheme over Ethernet on an FPGA, which enables flexible load balancing strategies and achieves low-latency and high-throughput. Using parallelization and pipeline techniques, we skillfully implement multiple load balancing algorithms and server state monitoring for server clusters in the data center on FPGA. Besides, the FPGA Ethernet engine to significantly accelerate protocol offloading and packet processing is also designed in our system. We evaluate the system performance on a Xilinx Virtex UltraScale+ VCU118 board. The experimental results show that the system latency can achieve as low as 0.7µs, and the peak throughput is 100Gbps line-rate, which is at least 5.7x faster and 2.5x higher respectively than other hardware or software load balancers.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128060879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Dynamic Precision Multiplier For Deep Neural Network Accelerators
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524752
Chen Ding, Y. Huan, Lirong Zheng, Z. Zou
{"title":"Dynamic Precision Multiplier For Deep Neural Network Accelerators","authors":"Chen Ding, Y. Huan, Lirong Zheng, Z. Zou","doi":"10.1109/socc49529.2020.9524752","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524752","url":null,"abstract":"The application of dynamic precision multipliers in the deep neural network accelerators can greatly improve system's data processing capacity under same memory bandwidth limitation. This paper presents a Dynamic Precision Multiplier (DPM) for deep learning accelerators to adapt to light-weight deep learning models with varied precision. The proposed DPM adopts Booth algorithm and Wallace Adder Tree to support parallel computation of signed/unsigned one 16-bit, two 8-bit or four 4-bit at run time. The DPM is further optimized with simplified partial product selection logic and mixed partial product selection structure techniques, reducing power cost for energy-efficient edge computing. The DPM is evaluated in both FPGA and ASIC flow, and the results show that 4-bit mode consumes the least energy among the three modes at 1.34pJ/word. It also saves nearly 22.38% and 232.17% of the power consumption under 16-bit and 8-bit mode respectively when comparing with previous similar designs.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128099793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Deep Learning Acceleration using Digital-Based Processing In-Memory
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524776
M. Imani, Saransh Gupta, Yeseong Kim, T. Simunic
{"title":"Deep Learning Acceleration using Digital-Based Processing In-Memory","authors":"M. Imani, Saransh Gupta, Yeseong Kim, T. Simunic","doi":"10.1109/socc49529.2020.9524776","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524776","url":null,"abstract":"Processing In-Memory (PIM) has shown a great potential to accelerate inference tasks of Convolutional Neural Network (CNN). However, existing PIM architectures do not support high precision computation, e.g., in floating point precision, which is essential for training accurate CNN models. In addition, most of the existing PIM approaches require analog/mixed-signal circuits, which do not scale, exploiting insufficiently reliable multi-bit Non-Volatile Memory (NVM). In this paper, we propose FloatPIM, a fully-digital scalable PIM architecture that accelerates CNN in both training and testing phases. FloatPIM natively supports floating-point representation, thus enabling accurate CNN training. FloatPIM also enables fast communication between neighboring memory blocks to reduce internal data movement of the PIM architecture. We break the CNN computation into computing and data transfer modes. In computing mode, all blocks are processing a part of CNN training/testing in parallel, while in data transfer mode Float-PIM enables fast and row-parallel communication between the neighbor blocks. Our evaluation shows that FloatPIM training is on average 303.2 and 48.6 (4.3x and 15.8x) faster and more energy efficient as compared to GTX 1080 GPU (PipeLayer [1] PIM accelerator).","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132565012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Industry Forum on High Speed SOC Interconnects
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524725
{"title":"Industry Forum on High Speed SOC Interconnects","authors":"","doi":"10.1109/socc49529.2020.9524725","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524725","url":null,"abstract":"","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133849818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HARDY: Hardware based Analysis for malwaRe Detection in embedded sYstems
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524727
S. Kadiyala, Mohit Garg, Manaar Alam, H. Ngo, Debdeep Mukhopadhyay, T. Srikanthan
{"title":"HARDY: Hardware based Analysis for malwaRe Detection in embedded sYstems","authors":"S. Kadiyala, Mohit Garg, Manaar Alam, H. Ngo, Debdeep Mukhopadhyay, T. Srikanthan","doi":"10.1109/socc49529.2020.9524727","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524727","url":null,"abstract":"Increase in number of embedded systems which are interconnected has created need for mechanisms which can detect malicious exploits in a lightweight yet speed efficient fashion. Recent approaches that addressed this challenge focused on utilizing either high level or low level features along with machine learning algorithms to analyze behavior of unknown programs. However, the high level mechanisms are less tamper resistant and low level approaches result in high false positive. Moreover these approaches often consume high computational and storage resources, which are less suitable for embedded systems. In this paper, we present a custom hardware realization of a lightweight malware analysis approach. This approach analyzes an unknown program using a judicious combination of high level and low level features along some with statistical methods. We achieve an average of 3.06x reduction in power consumption and an average of 2.52x improvement in detection speed, when compared to existing hardware-based malware detection techniques and also a speed up of 1.8x compared to its software based realization.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131097335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hardware Accelerator for Multi-Head Attention and Position-Wise Feed-Forward in the Transformer
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524802
Siyuan Lu, Meiqi Wang, Shuang Liang, Jun Lin, Zhongfeng Wang
{"title":"Hardware Accelerator for Multi-Head Attention and Position-Wise Feed-Forward in the Transformer","authors":"Siyuan Lu, Meiqi Wang, Shuang Liang, Jun Lin, Zhongfeng Wang","doi":"10.1109/socc49529.2020.9524802","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524802","url":null,"abstract":"Designing hardware accelerators for deep neural networks (DNNs) has been much desired. Nonetheless, most of these existing accelerators are built for either convolutional neural networks (CNNs) or recurrent neural networks (RNNs). Recently, the Transformer model is replacing the RNN in the natural language processing (NLP) area. However, because of intensive matrix computations and complicated data flow being involved, the hardware design for the Transformer model has never been reported. In this paper, we propose the first hardware accelerator for two key components, i.e., the multi-head attention (MHA) ResBlock and the position-wise feed-forward network (FFN) ResBlock, which are the two most complex layers in the Transformer. Firstly, an efficient method is introduced to partition the huge matrices in the Transformer, allowing the two ResBlocks to share most of the hardware resources. Secondly, the computation flow is well designed to ensure the high hardware utilization of the systolic array, which is the biggest module in our design. Thirdly, complicated nonlinear functions are highly optimized to further reduce the hardware complexity and also the latency of the entire system. Our design is coded using hardware description language (HDL) and evaluated on a Xilinx FPGA. Compared with the implementation on GPU with the same setting, the proposed design demonstrates a speed-up of 14.6 x in the MHA ResBlock, and 3.4 x in the FFN ResBlock, respectively. Therefore, this work lays a good foundation for building efficient hardware accelerators for multiple Transformer networks.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123916652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 41
Switched Capacitor Based Area Efficient Voltage Quadruple for High Pumping Efficiency
Pub Date: 2020-09-08 | DOI: 10.1109/socc49529.2020.9524804
V. Rana, Shivam Kalla
{"title":"Switched Capacitor Based Area Efficient Voltage Quadruple for High Pumping Efficiency","authors":"V. Rana, Shivam Kalla","doi":"10.1109/socc49529.2020.9524804","DOIUrl":"https://doi.org/10.1109/socc49529.2020.9524804","url":null,"abstract":"This paper proposes a design of switched capacitor-based voltage quadruple circuit that can be used to generate on-chip high voltage. The circuit consist of two-phase clock signals, flying capacitors and Charge Transfer Switches (CTS). Proposed circuit uses 30% less flying capacitor as compare to conventional architectures. This circuit shows the voltage efficiency of 95% for no current load condition and uses total 0.015 mm2chip area. Circuit is design and implemented using in 90nm triple well technology using 5V capable transistors.","PeriodicalId":114740,"journal":{"name":"2020 IEEE 33rd International System-on-Chip Conference (SOCC)","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122559043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1