SysCIM: A Heterogeneous Chip Architecture for High-Efficiency CNN Training at Edge
Shuai Wang; Ziwei Li; Yuang Ma; Yi Kang
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 4, pp. 990-1003
DOI: 10.1109/TVLSI.2025.3526363
Published: 2025-01-15
URL: https://ieeexplore.ieee.org/document/10843320/
Citations: 0
Abstract
Neural network training is notoriously compute-intensive and time-consuming. Quantization is a promising way to improve training efficiency: lower data bit widths reduce both storage and computation requirements. State-of-the-art quantized training algorithms incur negligible accuracy loss, but they require dedicated quantization circuits to dynamically quantize large volumes of data. In addition, matrix transposition during neural network training becomes increasingly challenging as network size grows. To address these problems, we propose a quantized training architecture, SysCIM, a heterogeneous design consisting of a computing-in-memory (CIM) macro and a systolic array. First, the CIM macro performs efficient transposed matrix multiplication through flexible datapath control, meeting the need to transpose the weight matrix during training. Second, the systolic array uses two different dataflows in the forward (FW) and backward (BW) passes to handle transposed multiplication of the activation matrix and provides higher computational throughput. Third, we design efficient dedicated quantization circuits to support the quantization algorithms in hardware. Experimental results show that, compared with floating-point computing circuits, the area and power consumption of the two specialized quantization circuits are reduced by factors of 1.35 and 5.4 on average. In a 28-nm process, the architecture achieves 4.05 tera operations per second per watt (TOPS/W) energy efficiency for INT8 convolutional neural network (CNN) training. Compared with a state-of-the-art (SOTA) quantized training architecture, SysCIM delivers 1.8× higher energy efficiency.
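To see why training hardware needs transposed matrix multiplication and dynamic quantization, consider a plain linear layer: the backward pass multiplies by the transposed weight matrix (handled by the CIM macro in SysCIM) and by the transposed activation matrix (handled by the systolic array). The sketch below illustrates this in NumPy, with a hypothetical `quantize_int8` helper standing in for dynamic symmetric INT8 quantization; it is a minimal illustration of the general technique, not the paper's circuits or algorithm.

```python
import numpy as np

def quantize_int8(x):
    """Dynamic symmetric INT8 quantization (illustrative helper, not the
    paper's exact scheme): the scale is recomputed from the current
    tensor's max magnitude, which is why hardware must quantize on the fly."""
    scale = max(np.max(np.abs(x)), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))   # activations
W = rng.standard_normal((8, 3))   # weights

# Forward pass: Y = X @ W
Y = X @ W

# Backward pass: both gradients need a *transposed* operand.
dY = rng.standard_normal(Y.shape)
dX = dY @ W.T    # transposed weight matrix (CIM macro's job in SysCIM)
dW = X.T @ dY    # transposed activation matrix (systolic array's job)

assert dX.shape == X.shape and dW.shape == W.shape
```

Because the scale changes with every tensor, a software round trip like this is cheap in NumPy but costly in silicon, which motivates the dedicated quantization circuits the paper evaluates against floating-point equivalents.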
Journal description:
The IEEE Transactions on VLSI Systems is published as a monthly journal under the co-sponsorship of the IEEE Circuits and Systems Society, the IEEE Computer Society, and the IEEE Solid-State Circuits Society.
Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing, and systems applications. Generation of specifications, design, and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor, and process levels.
The IEEE Transactions on VLSI Systems was founded to address this critical area through a common forum. The editorial board, consisting of international experts, invites original papers that emphasize the novel systems-integration aspects of microelectronic systems, including interactions among systems design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chip and wafer fabrication, testing and packaging, and system-level qualification. The coverage of these Transactions thus focuses on VLSI/ULSI microelectronic systems integration.