{"title":"A 6T-SRAM-Based Computing-In-Memory Architecture using 22nm FD-SOI Device","authors":"S. Liu, J. J. Wang, Y. Liu, L. Cao, Y. Liu","doi":"10.1109/ICSTSN57873.2023.10151524","DOIUrl":null,"url":null,"abstract":"A computing in-memory architecture based on 22nm Fully Depleted Silicon On Insulator (FD-SOI) device is presented. By using FD-SOI devices, logic operations such as “and” “nor” or “ x “ Access Memory (SRAM) through the effect of body biasing in the $22\\mathrm{~nm}$ FD-SOI technology. The proposed architecture contains six modules: the SRAM-based Computing InMemory module, the Data Buffer module, the Pulse Generation module, the pre-charge module, AdditionActivation-Binarization module and the System Controller module. Complex ADC and DAC circuits are not involved in this design. Thereby, by using the FD-SOI devices, the convolution or dot product operations can be realized, which are always used in artificial intelligence (AI) algorithms in a very efficient way. A Binary Multi-Layer Perception (BMLP) is mapped to examine the design. Simulations shows that our design achieves 94% accuracy in the MNIST digit recognition task. And the energy efficiency is 73.03 TOPS/W, which is far beyond traditional AI accelerators and provides an efficient path for massive computing in-memory operation.","PeriodicalId":325019,"journal":{"name":"2023 2nd International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN)","volume":"62 43","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSTSN57873.2023.10151524","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
A computing-in-memory architecture based on the 22 nm Fully Depleted Silicon On Insulator (FD-SOI) device is presented. By using FD-SOI devices, logic operations such as "AND", "NOR", and "XNOR" can be realized inside the 6T Static Random Access Memory (SRAM) through the body-biasing effect of the 22 nm FD-SOI technology. The proposed architecture contains six modules: the SRAM-based Computing-In-Memory module, the Data Buffer module, the Pulse Generation module, the Pre-charge module, the Addition-Activation-Binarization module, and the System Controller module. Complex ADC and DAC circuits are not involved in this design. Thereby, the FD-SOI devices allow the convolution and dot-product operations that dominate artificial intelligence (AI) algorithms to be realized very efficiently. A Binary Multi-Layer Perceptron (BMLP) is mapped onto the architecture to examine the design. Simulations show that the design achieves 94% accuracy on the MNIST digit recognition task, with an energy efficiency of 73.03 TOPS/W, far beyond that of traditional AI accelerators, providing an efficient path for massive computing-in-memory operation.
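To make the binarized dataflow described in the abstract concrete, below is a minimal NumPy sketch of what a binary MLP layer reduces to when weights and activations are ±1: an XNOR-popcount dot product (the in-SRAM logic replaces multipliers) followed by an addition-activation-binarization step. This is an illustrative software model under stated assumptions, not the authors' circuit; the function names, layer sizes, and zero threshold are assumptions for the example.

```python
# Minimal software sketch (not the paper's circuit): a binarized dot product
# computed with XNOR + popcount, then addition-activation-binarization.
import numpy as np

def xnor_popcount_dot(x_bits, w_bits):
    """Dot product of +/-1 vectors encoded as {0,1} bits.

    XNOR counts matching bit positions; mapping matches/mismatches back to
    +1/-1 gives sum(x_i * w_i) with no multiplications.
    """
    matches = np.sum(x_bits == w_bits, axis=-1)   # popcount of the XNOR result
    n = x_bits.shape[-1]
    return 2 * matches - n                        # matches minus mismatches

def bmlp_layer(x_bits, weight_bits, threshold=0):
    """One binary MLP layer: per-neuron XNOR-popcount sums, then a sign
    activation against a threshold to re-binarize the outputs."""
    sums = np.array([xnor_popcount_dot(x_bits, w) for w in weight_bits])
    return (sums >= threshold).astype(np.uint8)

# Toy usage: 8 binary inputs feeding 4 binary neurons (sizes are illustrative).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8, dtype=np.uint8)
W = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)
print(bmlp_layer(x, W))
```

Because every partial result is a small integer count and the final activation is a 1-bit comparison, a hardware realization of this dataflow can avoid the complex ADC and DAC circuits mentioned in the abstract.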