RRAM-based Analog In-Memory Computing: Invited Paper
Xiaoming Chen, Tao Song, Yinhe Han
2021 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH), published 2021-11-08
DOI: 10.1109/NANOARCH53687.2021.9642235 (https://doi.org/10.1109/NANOARCH53687.2021.9642235)
Citations: 2
Abstract
Although resistive random-access memories (RRAMs) are capable of analog in-memory computing and can be used to accelerate applications such as neural networks, the analog-digital interface incurs considerable overhead and may even offset the benefits of RRAM-based in-memory computing. In this paper, we introduce how to reduce or eliminate the overhead of the analog-digital interface in RRAM-based neural network accelerators and linear solver accelerators. For the former, we create an analog inference flow and introduce a new methodology to accelerate the entire flow using resistive content-addressable memories (RCAMs), eliminating redundant analog-to-digital conversions. For the latter, we provide an approach to map classical iterative solvers onto RRAM-based crossbar arrays so that the hardware obtains the solution in O(1) time complexity without actual iterations; intermediate analog-to-digital and digital-to-analog conversions are thus completely eliminated. Simulation results demonstrate the performance and energy-efficiency advantages of our approaches. The accuracy of RRAM-based analog computing remains a key direction for future research.
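To make the two ideas in the abstract concrete, the sketch below is an illustrative software model, not the paper's implementation. It shows (a) the analog matrix-vector product a crossbar performs in a single read (applying row voltages v yields column currents I = Gᵀv by Ohm's and Kirchhoff's laws, where G is the programmed conductance matrix), and (b) a classical Jacobi iterative solver for Ax = b, whose per-iteration matrix-vector product is exactly what a crossbar read would replace. The paper's contribution goes further, obtaining the solution without explicit iterations; this digital loop is shown only for contrast, and all names and values here are hypothetical.

```python
import numpy as np

def crossbar_mvm(G, v):
    """One analog crossbar read: column currents I = G^T v (Kirchhoff sum)."""
    return G.T @ v

# Hypothetical conductance matrix (siemens) and applied row voltages (volts).
G = np.array([[1e-4, 2e-4],
              [3e-4, 4e-4]])
v = np.array([0.5, 1.0])
I = crossbar_mvm(G, v)  # one matrix-vector product per analog read

def jacobi_solve(A, b, iters=100):
    """Classical Jacobi iteration for A x = b: x_{k+1} = D^{-1}(b - R x_k).

    On a crossbar, the product R @ x each iteration would be a single
    analog operation; a digital loop like this one still needs AD/DA
    conversion at every step, which is the overhead the paper removes.
    """
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Diagonally dominant example, so Jacobi converges.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-6))  # → True
```

The digital version must convert the analog currents back to digital values after every read before the next iteration can start; eliminating those intermediate conversions is what allows the crossbar-based solver to reach the answer in effectively constant time.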