A Hardware Friendly Variation-Tolerant Framework for RRAM-Based Neuromorphic Computing

IF 5.2 · CAS Tier 1 (Engineering & Technology) · Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Fang-Yi Gu;Cheng-Han Yang;Ing-Chao Lin;Da-Wei Chang;Darsen D. Lu;Ulf Schlichtmann
{"title":"A Hardware Friendly Variation-Tolerant Framework for RRAM-Based Neuromorphic Computing","authors":"Fang-Yi Gu;Cheng-Han Yang;Ing-Chao Lin;Da-Wei Chang;Darsen D. Lu;Ulf Schlichtmann","doi":"10.1109/TCSI.2024.3443180","DOIUrl":null,"url":null,"abstract":"Emerging resistive random access memory (RRAM) attracts considerable interest in computing-in-memory by its high efficiency in multiply-accumulate operation, which is the key computation in the neural network (NN). However, due to the imperfect fabrication, RRAM cells suffer from the variations, which make the values in RRAM cells deviate from the target values so that the accuracy of the RRAM-based NN accelerator degrades significantly. Moreover, in a practical hardware design of RRAM-based NN accelerators, if the number of wordlines and bitlines in a crossbar array activated at the same time increases, ADCs with a high resolution are required and the power consumption of ADC increases. This paper proposes a novel methodology to mitigate the impact of variations in RRAM-based neural network accelerators. The methodology includes a unary-based non-uniform quantization method and a variation-aware operation unit (OU) based framework. The unary-based non-uniform quantization method equalizes the significance of weights stored in each RRAM cell to reduce the impact of variations. The variation-aware OU-based framework activates only RRAM cells in the same OU at the same time, which reduces the power consumption of ADCs. Additionally, the framework introduces three methods, including OU skipping, OU recombination, and OU compensation, to further mitigate the impact of variations. The experiments show that the proposed approach outperforms the state-of-the-art among four NN models on two datasets with 2-bit cell resolution.","PeriodicalId":13039,"journal":{"name":"IEEE Transactions on Circuits and Systems I: Regular Papers","volume":"71 12","pages":"6419-6432"},"PeriodicalIF":5.2000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems I: Regular Papers","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10685551/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0

Abstract

Emerging resistive random access memory (RRAM) has attracted considerable interest in computing-in-memory because of its high efficiency in multiply-accumulate operations, the key computation in neural networks (NNs). However, due to imperfect fabrication, RRAM cells suffer from variations that cause the stored values to deviate from their targets, significantly degrading the accuracy of RRAM-based NN accelerators. Moreover, in practical hardware designs of RRAM-based NN accelerators, as the number of simultaneously activated wordlines and bitlines in a crossbar array grows, higher-resolution ADCs are required and ADC power consumption increases. This paper proposes a novel methodology to mitigate the impact of variations in RRAM-based neural network accelerators. The methodology includes a unary-based non-uniform quantization method and a variation-aware operation unit (OU) based framework. The unary-based non-uniform quantization method equalizes the significance of the weights stored in each RRAM cell to reduce the impact of variations. The variation-aware OU-based framework activates only the RRAM cells in the same OU at any given time, which reduces the power consumption of the ADCs. Additionally, the framework introduces three methods, OU skipping, OU recombination, and OU compensation, to further mitigate the impact of variations. Experiments show that the proposed approach outperforms the state of the art on four NN models and two datasets with 2-bit cell resolution.
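To make the abstract's two key ideas concrete, the sketch below illustrates in plain Python/NumPy how a weight can be unary-encoded across several equal-significance RRAM cells, and how a crossbar column can be evaluated one operation unit (OU) at a time while skipping all-zero OUs. It is a minimal illustration under assumed parameters (cell resolution, cells per weight, OU size, variation model); it is not the authors' implementation, and the function names are hypothetical.

```python
import numpy as np

# Assumed parameters (illustrative only, not taken from the paper's setup)
CELL_LEVELS = 4       # 2-bit cells: each cell stores a level in {0, 1, 2, 3}
CELLS_PER_WEIGHT = 5  # number of cells used to represent one weight
OU_ROWS = 8           # wordlines activated together in one operation unit
SIGMA = 0.1           # assumed relative device variation (log-normal noise)

def unary_encode(weight, levels=CELL_LEVELS, n_cells=CELLS_PER_WEIGHT):
    """Spread a quantized weight over n_cells equal-significance cells.

    Unlike binary positional encoding, where the MSB cell dominates the
    result and is most sensitive to variation, every cell here carries the
    same significance, so a deviation in any single cell has the same
    bounded impact on the reconstructed weight.
    """
    assert 0 <= weight <= (levels - 1) * n_cells
    cells, remaining = [], weight
    for _ in range(n_cells):
        level = min(remaining, levels - 1)
        cells.append(level)
        remaining -= level
    return np.array(cells)

def apply_variation(cells, sigma=SIGMA, rng=np.random.default_rng(0)):
    """Multiplicative device variation on programmed levels (assumed model)."""
    return cells * rng.lognormal(mean=0.0, sigma=sigma, size=cells.shape)

def ou_column_mvm(weight_column, inputs, ou_rows=OU_ROWS):
    """Accumulate one crossbar column OU by OU.

    Only `ou_rows` wordlines are activated at a time, so each ADC read only
    has to resolve a small partial sum. OUs whose stored cells are all zero
    are skipped entirely (OU skipping), saving ADC conversions.
    """
    total = 0.0
    for start in range(0, len(weight_column), ou_rows):
        w_ou = weight_column[start:start + ou_rows]
        x_ou = inputs[start:start + ou_rows]
        if not np.any(w_ou):                 # OU skipping: nothing stored here
            continue
        total += float(np.dot(x_ou, w_ou))   # one low-resolution ADC read
    return total

# Example: encode a weight, perturb it, and compare the reconstruction
ideal = unary_encode(7)
noisy = apply_variation(ideal)
print("ideal cells:", ideal, "-> weight", ideal.sum())
print("noisy cells:", np.round(noisy, 2), "-> weight", round(noisy.sum(), 2))

# Example: OU-wise evaluation of a sparse 32-row column with binary inputs
rng = np.random.default_rng(1)
column = np.zeros(32)
column[rng.choice(32, size=6, replace=False)] = rng.integers(1, 4, size=6)
x = rng.integers(0, 2, size=32)
print("column MVM:", ou_column_mvm(column, x))
```

With positional (binary) multi-cell encoding, the same relative variation on a most-significant cell shifts the weight by a power-of-two amount, whereas with unary encoding each cell's deviation is bounded by a single level step; this is the equalized-significance property the abstract refers to.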
Source Journal

IEEE Transactions on Circuits and Systems I: Regular Papers (Engineering, Electrical & Electronic)
CiteScore: 9.80
Self-citation rate: 11.80%
Articles per year: 441
Review time: 2 months
Journal description: TCAS I publishes regular papers in the field specified by the theory, analysis, design, and practical implementations of circuits, and the application of circuit techniques to systems and to signal processing. Included is the whole spectrum from basic scientific theory to industrial applications. The field of interest covered includes: Analog, Digital and Mixed Signal Circuits and Systems; Nonlinear Circuits and Systems; Integrated Sensors; MEMS and Systems on Chip; Nanoscale Circuits and Systems; Optoelectronic Circuits and Systems; Power Electronics and Systems; Software for Analog-and-Logic Circuits and Systems; Control aspects of Circuits and Systems.