Interconnect-Centric Benchmarking of In-Memory Acceleration for DNNs
G. Krishnan, Sumit K. Mandal, C. Chakrabarti, Jae-sun Seo, U. Ogras, Yu Cao
2021 China Semiconductor Technology International Conference (CSTIC), 14 March 2021
DOI: 10.1109/CSTIC52283.2021.9461480
Citations: 3
Abstract
In-memory computing (IMC) provides a dense and parallel structure for high-performance and energy-efficient acceleration of deep neural networks (DNNs). The increased computational density of IMC architectures results in increased on-chip communication costs, stressing the interconnect fabric. In this work, we develop a novel performance benchmark tool for IMC architectures that incorporates devices, circuits, architecture, and interconnect under a single roof. The tool assesses the area, energy, and latency of the IMC accelerator. We analyze three interconnect cases to illustrate the versatility of the tool: (1) point-to-point (P2P) and network-on-chip (NoC) based IMC architectures, to demonstrate the criticality of the interconnect choice; (2) area and energy optimization, to improve IMC utilization and reduce on-chip interconnect cost; (3) evaluation of a reconfigurable NoC, to achieve minimum on-chip communication latency. Through these studies, we motivate the need for future work on the design of optimal on-chip and off-chip interconnect fabrics for IMC architectures.
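To illustrate why the interconnect choice matters for a tiled IMC accelerator, the sketch below compares first-order on-chip communication latency and energy for a dedicated point-to-point link versus a 2D-mesh NoC path. This is not the authors' benchmark tool; it is a minimal Python model, and every parameter value (flit width, per-hop energy, router and wire delays, transfer size, hop count) is an illustrative assumption rather than a number from the paper.

```python
# Minimal sketch (assumed parameters, not the paper's tool): first-order
# comparison of on-chip communication cost for a tiled IMC accelerator
# under a point-to-point (P2P) link versus a 2D-mesh NoC.

import math

# Hypothetical interconnect parameters (assumptions for illustration only)
FLIT_BITS = 32               # flit width in bits
CYCLE_NS = 1.0               # interconnect clock period (ns)
E_PER_BIT_PER_HOP_PJ = 0.15  # energy per bit per hop (pJ)
ROUTER_DELAY_CYCLES = 3      # per-hop router pipeline delay (NoC)
WIRE_DELAY_CYCLES = 1        # per-hop wire/link delay


def p2p_cost(bits: int, distance_tiles: int):
    """Dedicated P2P link: head latency grows with physical distance,
    then the payload is serialized over the link width."""
    flits = math.ceil(bits / FLIT_BITS)
    latency_ns = (distance_tiles * WIRE_DELAY_CYCLES + flits) * CYCLE_NS
    energy_pj = bits * distance_tiles * E_PER_BIT_PER_HOP_PJ
    return latency_ns, energy_pj


def noc_cost(bits: int, hops: int):
    """2D-mesh NoC: per-hop router + wire delay for the head flit, with
    wormhole-style pipelining of the remaining flits."""
    flits = math.ceil(bits / FLIT_BITS)
    latency_ns = (hops * (ROUTER_DELAY_CYCLES + WIRE_DELAY_CYCLES) + flits) * CYCLE_NS
    energy_pj = bits * hops * E_PER_BIT_PER_HOP_PJ
    return latency_ns, energy_pj


if __name__ == "__main__":
    # Example: an assumed 64 kb activation transfer between two IMC tiles
    # that are 4 tiles apart.
    bits = 64 * 1024
    results = {
        "P2P": p2p_cost(bits, distance_tiles=4),
        "NoC": noc_cost(bits, hops=4),
    }
    for name, (lat, en) in results.items():
        print(f"{name}: latency = {lat:.1f} ns, energy = {en / 1000:.2f} nJ")
```

Even this simple model shows the trade-off the paper's case studies explore: the NoC pays extra per-hop router latency but shares wiring across tiles, whereas P2P links minimize hop delay at the cost of dedicated, poorly utilized wires as the tile count grows.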