Yongliang Zhou, Zuo Cheng, Han Liu, Tianzhu Xiong, Bo Wang
{"title":"A 22-nm FDSOI 8T SRAM Based Time-Domain CIM for Energy-Efficient DNN Accelerators","authors":"Yongliang Zhou, Zuo Cheng, Han Liu, Tianzhu Xiong, Bo Wang","doi":"10.1109/APCCAS55924.2022.10090315","DOIUrl":null,"url":null,"abstract":"In memory computation for Deep neural networks (DNNs) applications is an attractive approach to improve the energy efficiency of MAC operations under a memory-wall constraint, since it is highly parallel and can save a great amount of computation and memory access power. In this paper, we propose a time-domain compute in memory (CIM) design based on Fully Depleted Silicon On Insulator (FD-SOI) 8T SRAM. A $128\\mathrm{x}128$ 8T SRAM bit-cell array is built for processing a vector-matrix multiplication (or parallel dot-products) with $8\\mathrm{x}$ binary (0 or 1) inputs, in-array 8-bits weights, and 8bits output precision for DNN applications. The column-wise TDC converts the delay accumulation results to 8bits output codes using replica bit-cells for each conversion. Monte-Carlo simulations have verified both linearity and process variation. The energy efficiency of the 8bits operation is 32.8TOPS/W at 8bits TDC mode using 0.9V supply and 20MHz.","PeriodicalId":243739,"journal":{"name":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","volume":"406 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APCCAS55924.2022.10090315","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
In-memory computation for deep neural network (DNN) applications is an attractive approach to improving the energy efficiency of MAC operations under the memory-wall constraint, since it is highly parallel and saves a large amount of computation and memory-access power. In this paper, we propose a time-domain compute-in-memory (CIM) design based on fully depleted silicon-on-insulator (FD-SOI) 8T SRAM. A 128×128 8T SRAM bit-cell array is built to process vector-matrix multiplications (parallel dot-products) with 8× binary (0 or 1) inputs, in-array 8-bit weights, and 8-bit output precision for DNN applications. A column-wise time-to-digital converter (TDC) converts the delay-accumulation results into 8-bit output codes, using replica bit-cells for each conversion. Monte Carlo simulations verify both linearity and robustness to process variation. The energy efficiency of the 8-bit operation is 32.8 TOPS/W in 8-bit TDC mode at a 0.9 V supply and 20 MHz.
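The compute scheme the abstract describes — binary inputs applied over multiple cycles against in-array 8-bit weights, with results accumulated per column — can be illustrated with a purely functional model. This is a minimal sketch, not the paper's circuit: the array size and bit-serial shift-and-add ordering are assumptions, and the analog delay accumulation and TDC quantization are idealized away (the model keeps full precision instead of clipping to 8-bit output codes).

```python
import random

ROWS, COLS = 128, 128  # assumed to match the 128x128 bit-cell array

def cim_vmm(x_bit_planes, weights):
    """Bit-serial vector-matrix multiply.

    x_bit_planes: 8 binary (0/1) input vectors of length ROWS, MSB plane first;
                  each plane models one cycle of binary inputs to the array.
    weights:      ROWS x COLS matrix of 8-bit weights stored in the array.
    Returns the COLS accumulated dot-product results (ideal, unquantized).
    """
    acc = [0] * COLS
    for plane in x_bit_planes:
        for c in range(COLS):
            # One cycle: binary-input dot-product per column (the in-array MAC),
            # combined with the running sum by shift-and-add.
            acc[c] = (acc[c] << 1) + sum(plane[r] * weights[r][c] for r in range(ROWS))
    return acc

# Check against a full-precision reference: feeding the 8 bit-planes of an
# 8-bit activation vector reconstructs the exact integer dot-products.
random.seed(0)
x = [random.randrange(256) for _ in range(ROWS)]            # 8-bit activations
planes = [[(xi >> (7 - b)) & 1 for xi in x] for b in range(8)]
w = [[random.randrange(-128, 128) for _ in range(COLS)] for _ in range(ROWS)]
ref = [sum(x[r] * w[r][c] for r in range(ROWS)) for c in range(COLS)]
assert cim_vmm(planes, w) == ref
```

In the actual design each column's partial sum would appear as an accumulated delay, and the column-wise TDC would map it to an 8-bit code; here the shift-and-add loop stands in for that analog path.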