2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS): Latest Publications

Toward A Real-Time Elliptic Curve Cryptography-Based Facial Security System
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090407
T. Tan, Hanho Lee
{"title":"Toward A Real-Time Elliptic Curve Cryptography-Based Facial Security System","authors":"T. Tan, Hanho Lee","doi":"10.1109/APCCAS55924.2022.10090407","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090407","url":null,"abstract":"This paper presents a novel approach for a facial security system using elliptic curve cryptography. Face images extracted from input video are encrypted before sending to a remote server. The input face images are completely encrypted by mapping each pixel value of the detected face from the input video frame to a point on an elliptic curve. The original image can be recovered when needed using the elliptic curve cryptography decryption function. Specifically, we modify point multiplication designed for projective coordinates and apply the modified approach in affine coordinates to speed up scalar point multiplication operation. Image encryption and decryption operations are also facilitated using our existing scheme. Simulation results on Visual Studio demonstrate that the proposed systems help accelerate encryption and decryption operations while maintaining information confidentiality.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117069174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
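
As a rough illustration of the affine-coordinate scalar multiplication and ElGamal-style point encryption described in the abstract above, a minimal Python sketch follows. The toy curve, keys, and the stand-in for the pixel-to-point mapping are illustrative assumptions, not the authors' design.

```python
# Minimal affine-coordinate ECC sketch (toy curve and ElGamal-style point
# encryption; parameters and names are illustrative, not the paper's design).

p, a, b = 17, 2, 2           # tiny textbook curve y^2 = x^3 + 2x + 2 (mod 17)
G = (5, 1)                   # generator of the point group (order 19)
O = None                     # point at infinity

def _inv(x): return pow(x, p - 2, p)

def ec_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    lam = ((3*x1*x1 + a) * _inv(2*y1) if P == Q else (y2 - y1) * _inv(x2 - x1)) % p
    x3 = (lam*lam - x1 - x2) % p
    return (x3, (lam*(x1 - x3) - y1) % p)

def ec_neg(P):
    return O if P is O else (P[0], (-P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication in affine coordinates."""
    R = O
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# ElGamal-style encryption of a point M (stand-in for "pixel mapped to a curve point").
priv = 7
pub = ec_mul(priv, G)
M = ec_mul(3, G)             # placeholder for the pixel-to-point mapping
k = 5                        # per-message random scalar in a real system
C1, C2 = ec_mul(k, G), ec_add(M, ec_mul(k, pub))
recovered = ec_add(C2, ec_neg(ec_mul(priv, C1)))
assert recovered == M
```

Each affine addition costs one modular inversion, which is why speeding up the scalar-multiplication routine in this coordinate system, as the abstract describes, matters for real-time use.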
An Energy Efficient Precision Scalable Computation Array for Neural Radiance Field Accelerator
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090268
Chaolin Rao, Haochuan Wan, Yueyang Zheng, Pingqiang Zhou, Xin Lou
{"title":"An Energy Efficient Precision Scalable Computation Array for Neural Radiance Field Accelerator","authors":"Chaolin Rao, Haochuan Wan, Yueyang Zheng, Pingqiang Zhou, Xin Lou","doi":"10.1109/APCCAS55924.2022.10090268","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090268","url":null,"abstract":"Neural Radiance Field (NeRF), a recent advance in neural rendering, demonstrates impressive results for photo-realistic novel view synthesis. However, it faces challenges for deployment in practical rendering applications due to the large amount of multiply-accumulate (MAC) operations. For hardware accelerator design, precision-scalable MAC array, which can support computations with various precision can be used to optimize the power consumption of NeRF rendering accelerators. Recently, a variety of precision-scalable MAC arrays have been proposed to reduce the computational complexity of Convolutional Neural Networks (CNN). However, most of them require a lot of control logic to support different levels of precision. This paper proposes a precision-scalable MAC array with serial mode, which can support the multiplication with different precision of weight in multiple cycles with little overhead. Implementation results show that the energy efficiency of the proposed MAC array is about 14.54 TOPS/W and 4.83 TOPS/W for 4-bit and 8-bit computation modes, superior to other existing precision-scalable solutions.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115520752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
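
The serial-mode idea can be modeled at the behavioral level: a weight is processed in narrow slices over successive cycles, and the shifted partial sums are accumulated, so an 8-bit weight reuses a 4-bit datapath over two cycles. The slice width, dtypes, and function name in the sketch below are assumptions, not the paper's array design.

```python
import numpy as np

def serial_mac(activations, weights, slice_bits=4, total_bits=8):
    """Behavioral model of a bit-serial, precision-scalable MAC:
    unsigned weights are split into slice_bits-wide slices that are
    multiplied and accumulated over successive 'cycles' with a shift."""
    acc = 0
    for cycle in range(total_bits // slice_bits):
        shift = cycle * slice_bits
        w_slice = (weights >> shift) & ((1 << slice_bits) - 1)
        acc += int(np.dot(activations, w_slice)) << shift
    return acc

acts = np.array([3, 1, 4, 1, 5], dtype=np.int64)
w8 = np.array([200, 17, 99, 250, 3], dtype=np.int64)        # 8-bit weights
assert serial_mac(acts, w8) == int(np.dot(acts, w8))        # two 4-bit cycles
assert serial_mac(acts, w8 & 0xF, total_bits=4) == int(np.dot(acts, w8 & 0xF))  # native 4-bit mode
```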
Optimal Evasive Path Planning with Velocity Constraint
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090375
Karnika Biswas, Hakim Ghazzai, I. Kar, Y. Massoud
{"title":"Optimal Evasive Path Planning with Velocity Constraint","authors":"Karnika Biswas, Hakim Ghazzai, I. Kar, Y. Massoud","doi":"10.1109/APCCAS55924.2022.10090375","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090375","url":null,"abstract":"Pursuit evasion is an important category of mobile robotics application related to surveillance, spying and gathering ambient information. This paper presents a novel optimal approach to evasion planning, considering physical limitations of the environment and the evader. The results show that the proposed formulation is applicable irrespective of the number of pursuing agents and the relative velocities of the pursuers and the evader, contrary to the traditional requirement that evasion strategies need to be configured according to situation-dependent cases. The proposed policy is generic and can be implemented in real-time by iterative optimization using model predictive controllers, the objective being avoidance of capture or at the least, maximizing the capture time.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114216132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
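
As a greatly simplified stand-in for the iterative optimization described above, the sketch below lets an evader choose, at each step, the velocity-constrained move that maximizes its minimum distance to all pursuers. The true formulation uses model predictive control with an optimal cost; this toy greedy loop only illustrates the velocity constraint and the pursuer-count independence.

```python
import numpy as np

def evasive_step(evader, pursuers, v_max, dt=0.1, n_headings=72):
    """Greedy one-step evasion: among headings reachable at speed <= v_max,
    pick the one that maximizes the minimum distance to any pursuer.
    (Illustrative stand-in for the paper's MPC-based optimization.)"""
    best, best_score = evader, -np.inf
    for theta in np.linspace(0.0, 2*np.pi, n_headings, endpoint=False):
        cand = evader + v_max*dt*np.array([np.cos(theta), np.sin(theta)])
        score = min(np.linalg.norm(cand - p) for p in pursuers)
        if score > best_score:
            best, best_score = cand, score
    return best

evader = np.array([0.0, 0.0])
pursuers = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
for _ in range(50):                     # pursuers head straight at the evader
    evader = evasive_step(evader, pursuers, v_max=1.0)
    pursuers = [p + 0.08*(evader - p)/np.linalg.norm(evader - p) for p in pursuers]
```

The same loop works for any number of pursuers, since only the minimum distance enters the score.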
A Quantization Model Based on a Floating-point Computing-in-Memory Architecture
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090283
X. Chen, An Guo, Xinbing Xu, Xin Si, Jun Yang
{"title":"A Quantization Model Based on a Floating-point Computing-in-Memory Architecture","authors":"X. Chen, An Guo, Xinbing Xu, Xin Si, Jun Yang","doi":"10.1109/APCCAS55924.2022.10090283","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090283","url":null,"abstract":"Computing-in-memory (CIM) has been proved to perform high energy efficiency and significant acceleration effect for high computational parallelism neural networks. Floating-point numbers and floating-point CIMs (FP-CIM) are required to execute high performance training and high accuracy inference for neural networks. However, none of former works discuss the relationship between circuit design based on the FP-CIM architecture and neural networks. In this paper, we propose a quantization model based on a FP-CIM architecture to figure out this relationship in PYTORCH. According to experimental results we summarize some principles on FP-CIM macro design. Using our quantization model can reduce data storage overhead by more than 70.0%, and control floating-point networks inference accuracy loss within 0.5%, which is 1.7% better than integer networks.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114237678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
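
The abstract describes modeling quantization in PYTORCH; a generic symmetric fake-quantization module of the kind commonly used for such studies is sketched below. The bit-width, scaling rule, and class name are assumptions and do not reflect the paper's FP-CIM number format.

```python
import torch

class FakeQuant(torch.nn.Module):
    """Generic symmetric fake-quantization (quantize -> dequantize) used to
    emulate reduced-precision CIM arithmetic inside a float PyTorch graph.
    Bit-width and rounding are illustrative, not the paper's FP-CIM format;
    straight-through gradient handling is omitted for brevity."""
    def __init__(self, bits=8):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1

    def forward(self, x):
        scale = x.detach().abs().max().clamp(min=1e-8) / self.qmax
        q = torch.clamp(torch.round(x / scale), -self.qmax - 1, self.qmax)
        return q * scale                    # dequantized ("fake-quant") value

x = torch.randn(4, 16)
layer = torch.nn.Linear(16, 8)
fq = FakeQuant(bits=8)
y = layer(fq(x))                            # activations quantized before the MACs
print((fq(x) - x).abs().max())              # quantization error introduced
```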
An 8-T Processing-in-Memory SRAM Cell-Based Pixel-Parallel Array Processor for Vision Chips
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090359
Leyi Chen, Junxian He, Jianyi Yu, Haibing Wang, Jing Lu, Liyuan Liu, N. Wu, Cong Shi, Tian Min
{"title":"An 8-T Processing-in-Memory SRAM Cell-Based Pixel-Parallel Array Processor for Vision Chips","authors":"Leyi Chen, Junxian He, Jianyi Yu, Haibing Wang, Jing Lu, Liyuan Liu, N. Wu, Cong Shi, Tian Min","doi":"10.1109/APCCAS55924.2022.10090359","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090359","url":null,"abstract":"Vision chip is a high-speed image processing device, featuring a massively-parallel pixel-level processing element (PE) array to boost pixel processing speed. However, the collocated processing unit and fine-grained data memory unit inside each PE impose a huge requirement on memory access bandwidth as well as big area and energy consumption. To overcome this bottleneck, this paper proposes a full custom $mathbf{8T}$ SRAM-based Processing-in-memory (PIM) architecture to realize pixel-parallel array processor for high-speed energy-efficient vision chips. The proposed PIM architecture is constructed by emending multiplexer-based computing circuits into a dual port 8T SRAM array, so as to form a PIM PE array. Each PIM PE holds a 66-bit 8T SRAM cell block embedding in-memory logic functions, of which 64-bit 8T SRAM cells serving as the PE memory, 2-bit 8T SRAM cells acting as a buffer register in the PE. A full custom physical layout of a 16 $times boldsymbol{16}$ prototyping PIM PE array is designed and simulated using a 65 nm CMOS technology. The simulation results demonstrate that our proposed PIM PE architecture can achieve 200 MHz operation at 1.2 V, and reach a high energy efficiency of 3.97 TOPS/W while keeping a compact area of 0.129 $mathbf{mm}^{2}$.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114240781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
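
A behavioral (software) model of a pixel-parallel PE array can convey the programming style independently of the 8T SRAM circuit details: every PE holds one pixel in its local memory and all PEs apply the same logic or arithmetic step in lock-step. The numpy sketch below is purely illustrative; the operation and array size are assumptions.

```python
import numpy as np

def pixel_parallel_op(frame, op=np.bitwise_xor, shift=(0, 1)):
    """Behavioral model of a pixel-parallel PE array: every PE holds one
    pixel in its local (SRAM) registers and applies the same operation to
    its own pixel and a neighbour's, all in lock-step (SIMD)."""
    neighbour = np.roll(frame, shift=shift, axis=(0, 1))
    return op(frame, neighbour)

frame = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8)   # 16x16 PE array
xor_map = pixel_parallel_op(frame)   # bitwise XOR with a horizontal neighbour
grad_x = pixel_parallel_op(
    frame,
    op=lambda a, b: np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8),
)                                    # absolute horizontal difference (edge-like map)
```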
RRAM Computing-in-Memory Using Transient Charge Transferring for Low-Power and Small-Latency AI Edge Inference
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090254
Linfang Wang, Junjie An, Wang Ye, Weizeng Li, Hanghang Gao, Yangu He, Jianfeng Gao, Jinshan Yue, Lingyan Fan, C. Dou
{"title":"RRAM Computing-in-Memory Using Transient Charge Transferring for Low-Power and Small-Latency AI Edge Inference","authors":"Linfang Wang, Junjie An, Wang Ye, Weizeng Li, Hanghang Gao, Yangu He, Jianfeng Gao, Jinshan Yue, Lingyan Fan, C. Dou","doi":"10.1109/APCCAS55924.2022.10090254","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090254","url":null,"abstract":"RRAM-based computing-in-memory (CIM) can potentially improve the energy- and area-efficiency for AI edge processors, yet may still suffer from performance degradations due to the large DC current and parasitic capacitance in the cell array during computation. In this work, we propose a new CIM design leveraging the transient-charge-transferring (TCT) between the parasitic capacitors in the high-density foundry-compatible two-transistor-two-resistor (2T2R) RRAM array, which can perform DC-current-free multiply-and-accumulate (MAC) operations with improved energy-efficiency, reduced latency and enhanced signal margin. The concept of TCT-CIM is silicon demonstrated using a 180nm 400Kb RRAM test-chip, which has achieved 7.36 times power reduction compared to the conventional scheme and measured read access time less than 17.22 ns.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121851316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
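
A simple behavioral model of charge-based MAC helps explain why transient charge transferring avoids DC current: each enabled cell contributes a fixed charge packet to a positive or negative summing line (for a differential 2T2R weight), and the resulting voltage difference encodes the accumulated sum. The capacitances, binary weights, and scaling below are illustrative assumptions, not the measured circuit.

```python
import numpy as np

def tct_mac(x_bits, w_signs, c_cell=1.0, v_dd=1.0, c_out=64.0):
    """Behavioral sketch of a transient-charge-transfer MAC with differential
    (2T2R-style) weights: each enabled cell dumps one charge packet onto the
    positive or negative summing line; the line-voltage difference encodes
    sum(x_i * w_i). All constants are illustrative."""
    q_pos = c_cell * v_dd * np.sum(x_bits * (w_signs > 0))
    q_neg = c_cell * v_dd * np.sum(x_bits * (w_signs < 0))
    dv = (q_pos - q_neg) / c_out            # differential output voltage
    return dv * c_out / (c_cell * v_dd)     # "digitized" back to the MAC value

x = np.random.randint(0, 2, 64)             # binary inputs
w = np.random.choice([-1, 1], 64)            # signed binary weights
assert tct_mac(x, w) == np.sum(x * w)
```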
A Power-Aware ECG Transmission Framework with Server Aided Lossless Compression
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090374
Jitumani Sarma, Rakesh Biswas
{"title":"A Power-Aware ECG Transmission Framework with Server Aided Lossless Compression","authors":"Jitumani Sarma, Rakesh Biswas","doi":"10.1109/APCCAS55924.2022.10090374","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090374","url":null,"abstract":"Wearable sensor nodes based WBAN system is utilized to reduce individuals' life risk by detecting various cardiac anomalies via remote ECG signal monitoring. In this context, a power-aware WBAN transmission system through a server-aided ECG compression technique is presented in this paper. For that, a lossless compression technique to deal with the power consumption issue of a sensor node is proposed. The proposed compression approach employs a frame-adaptive Golomb-rice coding in coordination with k-means clustering at the remote server. The proposed algorithm effectively achieves a similar compression ratio under different levels of noise incorporated in the digitized ECG signal. The algorithm is validated with ECG signals from the MIT-BIH arrhythmia database, resulting in an average compression ratio of 2.89. The VLSI architecture of the proposed technique is implemented on TSMC 90 nm technology, which consumes a power of 65 $mu W$ with 0.0049 $mm^{2}$ area overhead.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125523466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
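
Golomb-Rice coding with a per-frame parameter is the core of the scheme. The sketch below encodes non-negative residuals with parameter k, decodes them back, and picks k per frame from the mean residual magnitude, which is a common heuristic; the paper's exact adaptation rule, the prediction stage, and the server-side k-means step are not modeled here.

```python
def rice_encode(residuals, k):
    """Golomb-Rice code each non-negative residual with parameter k:
    unary quotient + k-bit binary remainder."""
    bits = ""
    for r in residuals:
        q, rem = r >> k, r & ((1 << k) - 1)
        bits += "1" * q + "0" + (format(rem, f"0{k}b") if k else "")
    return bits

def rice_decode(bits, k, n):
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bits[i] == "1":
            q, i = q + 1, i + 1
        i += 1                                  # skip the terminating '0'
        rem = int(bits[i:i + k], 2) if k else 0
        out.append((q << k) | rem)
        i += k
    return out

def frame_adaptive_k(residuals):
    """Pick k per frame from the mean magnitude (a common heuristic;
    the paper's exact adaptation rule may differ)."""
    mean = max(1, sum(residuals) // len(residuals))
    return max(0, mean.bit_length() - 1)

frame = [3, 0, 7, 2, 12, 1, 5, 4]               # sign-mapped prediction residuals
k = frame_adaptive_k(frame)
code = rice_encode(frame, k)
assert rice_decode(code, k, len(frame)) == frame
```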
A 55nm 32Mb Digital Flash CIM Using Compressed LUT Multiplier and Low Power WL Voltage Trimming Scheme for AI Edge Inference
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090358
Hongyang Hu, Zi Wang, Xiaoxin Xu, K. Xi, Kun Zhang, Junyu Zhang, C. Dou
{"title":"A 55nm 32Mb Digital Flash CIM Using Compressed LUT Multiplier and Low Power WL Voltage Trimming Scheme for AI Edge Inference","authors":"Hongyang Hu, Zi Wang, Xiaoxin Xu, K. Xi, Kun Zhang, Junyu Zhang, C. Dou","doi":"10.1109/APCCAS55924.2022.10090358","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090358","url":null,"abstract":"In this work, we proposed a digital flash computing-in-memory (CIM) architecture using compressed lookup-table multiplier (CLUTM) and low power word-line voltage trimming (LP-WLVT) schemes. The proposed concept is highly compatible to the standard commodity NOR flash memory. Compared to the conventional lookup-table (LUT) multipliers, CLUTM results in 32 times reduction on the area cost in the case of 8-bit multiplication. The LP-WLVT scheme can further reduce the inference power by 14%. The concept is silicon demonstrated in a 55nm 32Mb commercial flash memory, which can perform 8-bit multiply-and-accumulate (MAC) with a throughput of 51.2 GOPs. It provides 1.778ms frame shift when running TC-resnet8 network, which is $5 times$ more efficient than the previous works. The CLUTM-based digital CIM architecture can play an important role to enable commercial flash for highly-efficient AI edge inference.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131165078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
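
One plausible reading of a "compressed LUT multiplier" is nibble decomposition: store only a 4-bit × 8-bit product table and assemble 8-bit × 8-bit products from two lookups and a shift, as sketched below. This is an assumption for illustration; it yields a 16× table-size reduction, whereas the authors' CLUTM structure reports 32×, so their compression goes further than this sketch.

```python
# Nibble-decomposed LUT multiply: a 16 x 256 table replaces a 256 x 256 one.
LUT_4x8 = [[a * w for w in range(256)] for a in range(16)]   # 4-bit x 8-bit products

def clut_mul(x8, w8):
    """8-bit x 8-bit multiply built from two lookups in the 4x8 table."""
    lo, hi = x8 & 0xF, x8 >> 4
    return LUT_4x8[lo][w8] + (LUT_4x8[hi][w8] << 4)

# Exhaustive check over all 8-bit operand pairs.
assert all(clut_mul(x, w) == x * w for x in range(256) for w in range(256))
```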
A 27–29.5GHz 6-Bit Phase Shifter with 0.67–1.5 degrees RMS Phase Error in 65nm CMOS
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090332
Qin Duan, Zhijian Chen, Feng-yuan Mao, Y. Zou, Bin Li, Guangyin Feng, Yanjie Wang, Xiao-Ling Lin
{"title":"A 27–29.5GHz 6-Bit Phase Shifter with 0.67 −1.5 degrees RMS Phase Error in 65nm CMOS","authors":"Qin Duan, Zhijian Chen, Feng-yuan Mao, Y. Zou, Bin Li, Guangyin Feng, Yanjie Wang, Xiao-Ling Lin","doi":"10.1109/APCCAS55924.2022.10090332","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090332","url":null,"abstract":"A 27-29.5GHz 6-bit switch-type phase shifter (PS) using 65nm CMOS process is presented in this paper. The PS incorporates 6 series phase shift bits to realize the relative phase shift varying from 0° to 354.375° with a step of 5.625°. Novel design approaches for phase shift bit and bits cascading sequence are proposed to improve the bandwidth and the RMS phase error. The post-layout simulation results show that the PS exhibits an ultra-low RMS phase error of 0.67°-1.5° and RMS gain error of 0.63dB-0.8dB from 27GHz to 29.5GHz. The input and output return loss are both better than −10dB and the core size iS $0.90times 0.35text{mm}^{2}$.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134164616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
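
For reference, the RMS phase error of an N-state phase shifter is typically computed by referencing all states to state 0, subtracting the ideal uniform grid (5.625° steps for 6 bits), wrapping to ±180°, and taking the root-mean-square. The sketch below uses synthetic data, and this is one common convention; some papers use a best-fit offset instead of the state-0 reference.

```python
import numpy as np

def rms_phase_error(measured_deg):
    """RMS phase error of an N-state phase shifter: remove the common offset
    (state 0 as reference), compare to the ideal uniform grid, take the RMS."""
    measured = np.asarray(measured_deg, dtype=float)
    n = len(measured)
    ideal = np.arange(n) * (360.0 / n)            # 5.625° steps for 6 bits
    err = (measured - measured[0]) - ideal
    err = (err + 180.0) % 360.0 - 180.0           # wrap into (-180, 180]
    return float(np.sqrt(np.mean(err ** 2)))

# Illustrative 64 measured states with small random deviations (not paper data).
rng = np.random.default_rng(0)
states = np.arange(64) * 5.625 + rng.normal(0, 0.8, 64)
print(rms_phase_error(states))
```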
Low-Complexity Dynamic Single-Minimum Min-Sum Algorithm and Hardware Implementation for LDPC Codes
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS) Pub Date : 2022-11-11 DOI: 10.1109/APCCAS55924.2022.10090379
Qinyuan Zhang, Suwen Song, Zhongfeng Wang
{"title":"Low-Complexity Dynamic Single-Minimum Min-Sum Algorithm and Hardware Implementation for LDPC Codes","authors":"Qinyuan Zhang, Suwen Song, Zhongfeng Wang","doi":"10.1109/APCCAS55924.2022.10090379","DOIUrl":"https://doi.org/10.1109/APCCAS55924.2022.10090379","url":null,"abstract":"As a type of low-complexity decoding algorithm for low-density parity-check (LDPC) codes, the single-minimum min-sum (smMS) algorithm avoids finding the second minimum, while estimates it by adding a fixed value to the minimum instead. However, the inaccurate estimation of the sub-minimum results in obvious performance degradation. In this work, we propose an improved smMS algorithm, which adds a dynamic value to the minimum based on a special variable that can be easily computed and largely represents the convergence degree of iterative decoding. This new algorithm is thus called dynamic smMS (dsmMS) algorithm. In comparison to the standard normalized min-sum (NMS) algorithm, the performance gap for LDPC code (672,588) is narrowed from 0.55 dB of the smMS to 0.12 dB of the dsmMS. We also present a partially parallel decoding architecture for the dsmMS algorithm, and implement it under 55nm CMOS technology with an area of 0.21 mm2, Furthermore, compared with the traditional NMS decoder, the proposed design can reduce the area of the total decoder by 22%.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129619050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
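
In min-sum decoding, the check-node output on every edge uses the smallest input magnitude except on the edge that supplied that minimum, which normally needs the second minimum; smMS and dsmMS replace that second minimum with (minimum + offset). The sketch below implements this single-minimum update with the offset as a plain parameter; the dynamic, convergence-dependent offset rule that defines dsmMS is the paper's contribution and is not reproduced here.

```python
import numpy as np

def sm_min_sum_check_update(llrs_in, offset):
    """Single-minimum min-sum check-node update: the true sub-minimum is
    replaced by (min + offset). In the dsmMS algorithm the offset is made
    dynamic per iteration; here it is a fixed parameter (assumption)."""
    mags = np.abs(llrs_in)
    signs = np.sign(llrs_in)
    signs[signs == 0] = 1
    total_sign = np.prod(signs)
    i_min = int(np.argmin(mags))
    m1 = mags[i_min]
    m2_est = m1 + offset                         # estimated second minimum
    out = np.full_like(llrs_in, total_sign * m1, dtype=float)
    out *= signs                                 # extrinsic sign on each edge
    out[i_min] = total_sign * signs[i_min] * m2_est
    return out

msgs = np.array([1.2, -0.4, 2.5, -3.1])          # incoming variable-to-check LLRs
print(sm_min_sum_check_update(msgs, offset=0.3))
```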