{"title":"A/D Alleviator: Reducing Analog-to-Digital Conversions in Compute-In-Memory with Augmented Analog Accumulation","authors":"Weidong Cao, Xuan Zhang","doi":"10.1109/ISCAS46773.2023.10181895","DOIUrl":null,"url":null,"abstract":"Compute-in-memory (CIM) has shown great promise in accelerating numerous deep-learning tasks. However, existing analog CIM (ACIM) accelerators often suffer from frequent and energy-intensive analog-to-digital (A/D) conversions, severely limiting their energy efficiency. This paper proposes A/D Alleviator, an energy-efficient augmented analog accumulation data flow to reduce A/D conversions in ACIM accelerators. To make it, switched-capacitor-based multiplication and accumulation circuits are used to connect the bitlines (BLs) of memory crossbar arrays and the final A/D conversion stage. In this way, analog partial sums can be accumulated both spatially across all adjacent BLs that store high-precision weights and temporarily across all input cycles before the final quantization, thereby minimizing the need for explicit A/D conversions. Evaluations demonstrate that A/D Alleviator can improve energy efficiency by 4.9× and 1.9× with a high signal-to-noise ratio, as compared to state-of-the-art ACIM accelerators.","PeriodicalId":177320,"journal":{"name":"2023 IEEE International Symposium on Circuits and Systems (ISCAS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Symposium on Circuits and Systems (ISCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCAS46773.2023.10181895","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Compute-in-memory (CIM) has shown great promise in accelerating numerous deep-learning tasks. However, existing analog CIM (ACIM) accelerators often suffer from frequent and energy-intensive analog-to-digital (A/D) conversions, which severely limit their energy efficiency. This paper proposes A/D Alleviator, an energy-efficient augmented analog accumulation data flow that reduces A/D conversions in ACIM accelerators. To this end, switched-capacitor-based multiplication-and-accumulation circuits connect the bitlines (BLs) of the memory crossbar arrays to the final A/D conversion stage. In this way, analog partial sums can be accumulated both spatially, across all adjacent BLs that store high-precision weights, and temporally, across all input cycles, before the final quantization, thereby minimizing the number of explicit A/D conversions. Evaluations demonstrate that A/D Alleviator improves energy efficiency by 4.9× and 1.9× while maintaining a high signal-to-noise ratio, compared with state-of-the-art ACIM accelerators.
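To make the data-flow idea concrete, the following is a minimal behavioral sketch in Python, not the authors' circuit or evaluation. It assumes unsigned bit-sliced weights (one bit per adjacent BL), bit-serial inputs, a 128-row crossbar, and 8-bit precisions; it only counts A/D conversions per output, comparing a conventional per-BL, per-cycle quantization flow against an Alleviator-style flow that accumulates weighted partial sums in the analog domain and quantizes once.

```python
import numpy as np

W, N, ROWS = 8, 8, 128                       # assumed weight/input precision and crossbar rows
rng = np.random.default_rng(0)
weights = rng.integers(0, 2, size=(ROWS, W))  # bit-sliced weights: one bit per adjacent BL
inputs = rng.integers(0, 2, size=(ROWS, N))   # bit-serial inputs: one bit per input cycle

def conventional_flow():
    """Quantize every BL partial sum every input cycle; shift-and-add digitally."""
    conversions, acc = 0, 0
    for n in range(N):                        # temporal dimension: input cycles
        for w in range(W):                    # spatial dimension: weight bit-slices
            bl_sum = int(np.dot(inputs[:, n], weights[:, w]))  # analog BL dot product
            conversions += 1                  # explicit A/D conversion here
            acc += bl_sum << (w + n)          # digital shift-and-add of partial sums
    return acc, conversions

def alleviator_style_flow():
    """Accumulate binary-weighted partial sums in the analog domain, convert once."""
    conversions, analog_acc = 0, 0.0
    for n in range(N):
        for w in range(W):
            bl_sum = float(np.dot(inputs[:, n], weights[:, w]))
            analog_acc += bl_sum * 2 ** (w + n)  # stands in for switched-capacitor weighting
    conversions += 1                             # single final quantization per output
    return int(analog_acc), conversions

ref, c_ref = conventional_flow()
out, c_out = alleviator_style_flow()
assert ref == out                                # both flows compute the same MAC result
print(f"conventional: {c_ref} conversions, alleviator-style: {c_out} conversion")
```

Under these assumptions the conventional flow performs W × N = 64 conversions per output, while the analog-accumulation flow performs one, which illustrates where the reported energy savings come from; the actual gains depend on the switched-capacitor overhead and ADC resolution reported in the paper.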