A 22-nm FDSOI 8T SRAM Based Time-Domain CIM for Energy-Efficient DNN Accelerators

Yongliang Zhou, Zuo Cheng, Han Liu, Tianzhu Xiong, Bo Wang
2022 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), November 11, 2022. DOI: 10.1109/APCCAS55924.2022.10090315

Abstract

In-memory computation for deep neural network (DNN) applications is an attractive approach to improving the energy efficiency of MAC operations under the memory-wall constraint, since it is highly parallel and saves a large amount of computation and memory-access power. In this paper, we propose a time-domain compute-in-memory (CIM) design based on fully depleted silicon-on-insulator (FD-SOI) 8T SRAM. A 128×128 8T SRAM bit-cell array is built to process vector-matrix multiplications (parallel dot-products) with 8× binary (0 or 1) inputs, in-array 8-bit weights, and 8-bit output precision for DNN applications. A column-wise TDC converts the accumulated delay to an 8-bit output code, using replica bit-cells for each conversion. Monte Carlo simulations verify both the linearity and the robustness to process variation. The energy efficiency of 8-bit operation is 32.8 TOPS/W in 8-bit TDC mode with a 0.9 V supply at 20 MHz.
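The computation the array performs can be illustrated with a behavioral sketch: each column accumulates a dot-product between a binary input vector and its 8-bit weights, and a column-wise TDC quantizes the result to an 8-bit code. This is a minimal functional model, not the paper's circuit; the weight range and the linear quantization step are illustrative assumptions.

```python
import random

# Behavioral sketch of a 128x128 time-domain CIM column array (assumed
# parameters; not the paper's circuit implementation).
ROWS, COLS = 128, 128

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(ROWS)]                    # binary (0/1) inputs
W = [[rng.randint(-128, 127) for _ in range(COLS)] for _ in range(ROWS)]

# Ideal per-column MAC: sum the 8-bit weights of rows whose input bit is 1.
acc = [sum(W[r][c] for r in range(ROWS) if x[r]) for c in range(COLS)]

# TDC-like conversion: map the full accumulation range onto 8-bit output
# codes (a linear quantizer standing in for the delay-to-code conversion).
FULL_SCALE = ROWS * 127                                         # max |accumulation|
codes = [max(-128, min(127, round(a * 127 / FULL_SCALE))) for a in acc]
```

In the actual design, the accumulation would happen as delay along a column rather than as a digital sum, with replica bit-cells providing the TDC's reference delays; the sketch only reproduces the input/output behavior.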