An Efficient Variation-tolerant Method for RRAM-based Neural Network
Chenglong Huang, Nuo Xu, Junjun Wang, L. Fang
2022 IEEE 5th International Conference on Electronics Technology (ICET), 2022-05-13
DOI: 10.1109/icet55676.2022.9825190
Citations: 1
Abstract
Resistive Random Access Memory (RRAM) is a promising technology for efficient neural computing systems, offering low power, non-volatility, and good compatibility with CMOS. The RRAM-based crossbar is commonly employed to accelerate deep neural networks (DNNs) because of its intrinsic ability to execute multiply-and-accumulate (MAC) operations according to Kirchhoff's law. However, realistic device issues, especially variation arising from the intrinsic stochastic behavior of RRAM devices, cause significant degradation of inference accuracy. In this work, we propose an efficient method that employs scaling coefficients to improve learning capability by providing greater model capacity and compensating for the large information loss due to quantization and device variation. Further, stochastic noise is added to the weights during training to mimic device variation, enhancing the robustness of the DNN to parameter variation. We evaluate our method under different mapping methods and initialization conditions of the scaling coefficients. Simulation results indicate that our method can recover computing accuracy under device variation on several benchmark datasets (MNIST, Fashion-MNIST, and CIFAR-10).
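The two ideas the abstract combines, multiplicative noise injection on quantized weights to mimic RRAM conductance variation, and a per-layer scaling coefficient applied to the MAC output, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`quantize`, `noisy_mac`), the uniform quantizer, the lognormal variation model, and the parameters `alpha`, `sigma`, and `bits` are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=4):
    # Uniform symmetric quantization to the given bit width
    # (a hypothetical scheme, not the paper's exact quantizer).
    w_max = np.abs(w).max()
    if w_max == 0.0:
        return w
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.round(w / step) * step

def noisy_mac(x, w, alpha=1.0, sigma=0.1):
    # Simulate one crossbar MAC under device variation:
    #  1. quantize the weights (programming resolution limit),
    #  2. perturb each weight with multiplicative lognormal noise,
    #     a common model for RRAM conductance variation,
    #  3. rescale the layer output by a scaling coefficient alpha,
    #     which in training would be learned to compensate for the
    #     information loss from steps 1 and 2.
    w_q = quantize(w)
    w_dev = w_q * np.exp(rng.normal(0.0, sigma, size=w_q.shape))
    return alpha * (x @ w_dev)
```

During training, calling `noisy_mac` in the forward pass exposes the network to a fresh draw of device noise each step, so the learned weights (and `alpha`) settle into a solution that stays accurate when deployed on variable devices.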