An Efficient Variation-tolerant Method for RRAM-based Neural Network

Chenglong Huang, Nuo Xu, Junjun Wang, L. Fang
Published in: 2022 IEEE 5th International Conference on Electronics Technology (ICET)
Publication date: 2022-05-13
DOI: 10.1109/icet55676.2022.9825190
Citations: 1

Abstract

Resistive Random Access Memory (RRAM) is a promising technology for efficient neural computing systems, offering low power, non-volatility, and good compatibility with CMOS. The RRAM-based crossbar is commonly used to accelerate deep neural networks (DNNs) because it intrinsically executes multiply-and-accumulate (MAC) operations according to Kirchhoff's law. However, realistic device issues, especially the variation and intrinsic stochastic behavior of RRAM devices, cause significant degradation of inference accuracy. In this work, we propose an efficient method that employs scaling coefficients to improve learning capability, providing greater model capacity and compensating for the large information loss caused by quantization and device variation. Further, stochastic noise is added to the weights during training to mimic device variation, enhancing the robustness of the DNN to parameter variation. We evaluate our method under different mapping methods and initialization conditions for the scaling coefficients. Simulation results indicate that our method can recover computing accuracy under device variation on several benchmark datasets (MNIST, Fashion MNIST, and CIFAR-10).
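The mechanics the abstract describes — a crossbar MAC via Kirchhoff's law, multiplicative device variation, and a noise-injected forward pass with a learnable per-layer scaling coefficient — can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the log-normal noise model, the function names, and the scalar `scale` parameter are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mac(weights, inputs, sigma=0.1):
    """MAC on an RRAM crossbar with device variation.

    Each weight maps to a device conductance; by Kirchhoff's law a column
    current is the dot product of input voltages and column conductances.
    Multiplicative log-normal noise is one common model of RRAM conductance
    variation (an assumption here, not necessarily the paper's model).
    """
    noisy_weights = weights * rng.lognormal(mean=0.0, sigma=sigma, size=weights.shape)
    return inputs @ noisy_weights

def noise_injected_forward(weights, inputs, scale, sigma=0.1):
    """Training-time forward pass: inject the same kind of noise into the
    weights so the network learns to tolerate it, then apply a per-layer
    scaling coefficient to compensate for range lost to quantization and
    device variation."""
    noisy = weights * rng.lognormal(mean=0.0, sigma=sigma, size=weights.shape)
    return scale * (inputs @ noisy)

# Toy demonstration: variation perturbs the ideal MAC result.
W = rng.standard_normal((8, 4))   # 8 inputs, 4 crossbar columns
x = rng.standard_normal(8)
ideal = x @ W
perturbed = crossbar_mac(W, x, sigma=0.2)
scaled = noise_injected_forward(W, x, scale=1.0, sigma=0.2)
print("max deviation from ideal MAC:", np.abs(ideal - perturbed).max())
```

During training, `scale` would be a learnable parameter (per layer or per output channel) updated by backpropagation alongside the weights, so the network jointly adapts both to the injected noise.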