Energy-Efficient DNN Inferencing on ReRAM-Based PIM Accelerators Using Heterogeneous Operation Units

Gaurav Narang; Janardhan Rao Doppa; Partha Pratim Pande

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 44, no. 6, pp. 2130-2143. Published 2024-12-09. DOI: 10.1109/TCAD.2024.3514778
Operation unit (OU)-based configurations enable the design of energy-efficient and reliable ReRAM crossbar-based processing-in-memory (PIM) architectures for deep neural network (DNN) inferencing. To exploit sparsity and tackle crossbar nonidealities, matrix-vector-multiplication (MVM) operations are computed at a much smaller level of granularity than a full crossbar, referred to as OUs. However, determining the suitable OU size for a given DNN workload presents a nontrivial challenge as the DNN layers exhibit different levels of sparsity and have varying impact on overall predictive accuracy. In this article, we propose a framework for designing heterogeneous OU-based PIM accelerators. The OU configurations vary based on the characteristics of the neural layers and the time-dependent conductance drift of PIM devices due to repeated inference runs. Overall, our experimental results demonstrate that the sparsity-aware layer-wise heterogeneous OU-based PIM computation reduces latency and energy by 34% and 73% on average, respectively, compared to state-of-the-art homogeneous OU-based architectures without compromising the predictive accuracy.
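The core idea — computing the MVM over small operation-unit tiles of the crossbar and skipping tiles that hold only zero weights — can be sketched in a few lines of NumPy. This is a toy functional illustration under simplified assumptions, not the paper's implementation: the tile size, the all-zero skipping policy, and the pruning threshold below are hypothetical, and real OU scheduling also accounts for crossbar nonidealities and conductance drift.

```python
import numpy as np

def ou_mvm(weights, x, ou_rows=4, ou_cols=4):
    """Compute weights @ x by iterating over operation-unit (OU) tiles,
    skipping any tile whose weights are entirely zero (sparsity exploitation)."""
    m, n = weights.shape
    y = np.zeros(m)
    skipped, total = 0, 0
    for r in range(0, m, ou_rows):
        for c in range(0, n, ou_cols):
            tile = weights[r:r + ou_rows, c:c + ou_cols]
            total += 1
            if not tile.any():  # all-zero OU: no analog MVM would be issued
                skipped += 1
                continue
            # Partial sum from this OU, accumulated into the output slice
            y[r:r + ou_rows] += tile @ x[c:c + ou_cols]
    return y, skipped, total

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))
W[np.abs(W) < 1.0] = 0.0   # magnitude pruning to induce sparsity
W[0:4, 0:4] = 0.0          # force at least one all-zero OU tile
x = rng.standard_normal(16)

y_ou, skipped, total = ou_mvm(W, x, ou_rows=4, ou_cols=4)
assert np.allclose(y_ou, W @ x)  # tiled result matches the dense MVM
print(f"skipped {skipped} of {total} OUs")
```

Smaller OU sizes expose more skippable all-zero tiles in sparse layers but incur more per-OU overhead; the heterogeneous scheme in the paper chooses this trade-off per layer rather than fixing one OU size for the whole accelerator.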
About the journal:
The purpose of this Transactions is to publish papers of interest to individuals in the area of computer-aided design of integrated circuits and systems composed of analog, digital, mixed-signal, optical, or microwave components. The aids include methods, models, algorithms, and man-machine interfaces for system-level, physical and logical design including: planning, synthesis, partitioning, modeling, simulation, layout, verification, testing, hardware-software co-design and documentation of integrated circuit and system designs of all complexities. Design tools and techniques for evaluating and designing integrated circuits and systems for metrics such as performance, power, reliability, testability, and security are a focus.