NeuroSim Validation with 40nm RRAM Compute-in-Memory Macro

A. Lu, Xiaochen Peng, Wantong Li, Hongwu Jiang, Shimeng Yu

2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Published: 2021-06-06
DOI: 10.1109/AICAS51828.2021.9458501
Citations: 6
Abstract
Compute-in-memory (CIM) is an attractive solution for processing the extensive multiply-and-accumulate (MAC) workloads in deep neural network (DNN) hardware accelerators. A simulator offering a choice of mainstream and emerging memory technologies, architectures, and networks is a great convenience for fast early-stage design space exploration of CIM accelerators. DNN+NeuroSim is an integrated benchmark framework supporting flexible and hierarchical CIM array design options from the device level through the circuit level up to the algorithm level. In this paper, we validate and calibrate NeuroSim's predictions against post-layout simulations of a 40nm RRAM-based CIM macro. First, the memory device and CMOS transistor parameters are extracted from the TSMC PDK and applied to the NeuroSim settings; the peripheral modules and operating procedure are also configured to match the actual chip. Next, the module-level area, critical path, and energy consumption values from the SPICE simulations are compared with those from NeuroSim. Adjustment factors are introduced to account for transistor sizing and wiring area in layout, gate switching activity, post-layout performance drop, and other effects. We show that after calibration, NeuroSim's predictions are accurate, with chip-level error under 1%.
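The calibration flow described above can be sketched conceptually: a raw simulator estimate is scaled by an empirical adjustment factor, and the calibrated value is compared against a post-layout (SPICE) reference to compute the relative error. The following minimal sketch is purely illustrative; the function names, factor values, and numbers are assumptions, not the paper's actual calibration factors or data.

```python
# Illustrative sketch of the calibration idea from the abstract:
# a raw module-level estimate is scaled by an empirical adjustment
# factor, then checked against a post-layout reference value.
# All names and numbers here are hypothetical.

def calibrate(raw_estimate: float, adjustment_factor: float) -> float:
    """Scale a raw simulator estimate by an empirical adjustment factor."""
    return raw_estimate * adjustment_factor

def relative_error(predicted: float, reference: float) -> float:
    """Relative error of a prediction versus a reference value."""
    return abs(predicted - reference) / reference

# Hypothetical example: an area estimate corrected for layout
# wiring/transistor-sizing overhead, compared to a SPICE reference.
raw_area = 1000.0        # hypothetical raw NeuroSim estimate (um^2)
layout_factor = 1.15     # hypothetical wiring/sizing adjustment factor
spice_area = 1152.0      # hypothetical post-layout reference (um^2)

calibrated_area = calibrate(raw_area, layout_factor)
error = relative_error(calibrated_area, spice_area)
print(f"calibrated area = {calibrated_area:.1f} um^2, error = {error:.2%}")
```

In the paper's actual flow, separate factors cover layout area, switching activity, and post-layout performance drop; this sketch only shows the scale-and-compare pattern common to all of them.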