Reliable Memristive Neural Network Accelerators Based on Early Denoising and Sparsity Induction

Anlan Yu, Ning Lyu, Wujie Wen, Zhiyuan Yan
{"title":"基于早期去噪和稀疏性诱导的可靠记忆神经网络加速器","authors":"Anlan Yu, Ning Lyu, Wujie Wen, Zhiyuan Yan","doi":"10.1109/ASP-DAC52403.2022.9712525","DOIUrl":null,"url":null,"abstract":"Implementing deep neural networks (DNNs) in hardware is challenging due to the requirements of huge memory and computation associated with DNNs' primary operation—matrix-vector multiplications (MVMs). Memristive crossbar shows great potential to accelerate MVMs by leveraging its capability of in-memory computation. However, one critical obstacle to such a technique is potentially significant inference accuracy degradation caused by two primary sources of errors—the variations during computation and stuck-at-faults (SAFs). To overcome this obstacle, we propose a set of dedicated schemes to significantly enhance its tolerance against these errors. First, a minimum mean square error (MMSE) based denoising scheme is proposed to diminish the impact of variations during computation in the intermediate layers. To the best of our knowledge, this is the first work considering denoising in the intermediate layers without extra crossbar resources. Furthermore, MMSE early denoising not only stabilizes the crossbar computation results but also mitigates errors caused by low resolution analog-to-digital converters. Second, we propose a weights-to-crossbar mapping scheme by inverting bits to mitigate the impact of SAFs. The effectiveness of the proposed bit inversion scheme is analyzed theoretically and demonstrated experimentally. Finally, we propose to use L1 regularization to increase the network sparsity, as a greater sparsity not only further enhances the effectiveness of the proposed bit inversion scheme, but also facilitates other early denoising mechanisms. Experimental results show that our schemes can achieve 40%-78% accuracy improvement, for the MNIST and CIFAR10 classification tasks under different networks.","PeriodicalId":239260,"journal":{"name":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Reliable Memristive Neural Network Accelerators Based on Early Denoising and Sparsity Induction\",\"authors\":\"Anlan Yu, Ning Lyu, Wujie Wen, Zhiyuan Yan\",\"doi\":\"10.1109/ASP-DAC52403.2022.9712525\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Implementing deep neural networks (DNNs) in hardware is challenging due to the requirements of huge memory and computation associated with DNNs' primary operation—matrix-vector multiplications (MVMs). Memristive crossbar shows great potential to accelerate MVMs by leveraging its capability of in-memory computation. However, one critical obstacle to such a technique is potentially significant inference accuracy degradation caused by two primary sources of errors—the variations during computation and stuck-at-faults (SAFs). To overcome this obstacle, we propose a set of dedicated schemes to significantly enhance its tolerance against these errors. First, a minimum mean square error (MMSE) based denoising scheme is proposed to diminish the impact of variations during computation in the intermediate layers. To the best of our knowledge, this is the first work considering denoising in the intermediate layers without extra crossbar resources. 
Furthermore, MMSE early denoising not only stabilizes the crossbar computation results but also mitigates errors caused by low resolution analog-to-digital converters. Second, we propose a weights-to-crossbar mapping scheme by inverting bits to mitigate the impact of SAFs. The effectiveness of the proposed bit inversion scheme is analyzed theoretically and demonstrated experimentally. Finally, we propose to use L1 regularization to increase the network sparsity, as a greater sparsity not only further enhances the effectiveness of the proposed bit inversion scheme, but also facilitates other early denoising mechanisms. Experimental results show that our schemes can achieve 40%-78% accuracy improvement, for the MNIST and CIFAR10 classification tasks under different networks.\",\"PeriodicalId\":239260,\"journal\":{\"name\":\"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASP-DAC52403.2022.9712525\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC52403.2022.9712525","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Implementing deep neural networks (DNNs) in hardware is challenging due to the huge memory and computation requirements of DNNs' primary operation: matrix-vector multiplications (MVMs). Memristive crossbars show great potential to accelerate MVMs by leveraging their capability of in-memory computation. However, one critical obstacle to this technique is potentially significant inference accuracy degradation caused by two primary sources of errors: variations during computation and stuck-at faults (SAFs). To overcome this obstacle, we propose a set of dedicated schemes that significantly enhance tolerance to these errors. First, a minimum mean square error (MMSE) based denoising scheme is proposed to diminish the impact of variations during computation in the intermediate layers. To the best of our knowledge, this is the first work to consider denoising in the intermediate layers without extra crossbar resources. Furthermore, MMSE early denoising not only stabilizes the crossbar computation results but also mitigates errors caused by low-resolution analog-to-digital converters. Second, we propose a weight-to-crossbar mapping scheme that inverts bits to mitigate the impact of SAFs. The effectiveness of the proposed bit inversion scheme is analyzed theoretically and demonstrated experimentally. Finally, we propose to use L1 regularization to increase network sparsity, as greater sparsity not only further enhances the effectiveness of the proposed bit inversion scheme but also facilitates other early denoising mechanisms. Experimental results show that our schemes achieve a 40%-78% accuracy improvement on the MNIST and CIFAR10 classification tasks under different networks.
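
To make the two main mechanisms concrete, the following are minimal Python sketches reconstructed from the abstract alone; they are not the authors' implementation. All function names, the Gaussian signal model, and the per-column inversion granularity below are assumptions.

The first sketch shows a linear MMSE (Wiener) shrinkage of a noisy crossbar readout toward profiled activation statistics, which is one standard form an "MMSE-based denoising" estimator can take; the paper's estimator may be derived under a different model.

```python
import numpy as np

def mmse_denoise(y, mu_x, var_x, var_n):
    """Linear MMSE shrinkage of noisy pre-activations y = x + n.

    Illustrative assumptions: x ~ N(mu_x, var_x), with mu_x and var_x
    profiled offline from fault-free intermediate-layer activations,
    and n ~ N(0, var_n) from the device variation model. The estimate
        x_hat = mu_x + var_x / (var_x + var_n) * (y - mu_x)
    pulls noisy readouts toward the expected activation statistics.
    """
    gain = var_x / (var_x + var_n)
    return mu_x + gain * (y - mu_x)
```

The second sketch shows one plausible form of the bit-inversion weight-to-crossbar mapping against SAFs: for each crossbar column, store the weight bits either directly or inverted, whichever disagrees with fewer known stuck cells, and record a one-bit flag so the readout can undo the inversion.

```python
import numpy as np

def map_with_bit_inversion(weight_bits, stuck_mask, stuck_vals):
    """Choose, per column, between direct and inverted storage.

    weight_bits: (rows, cols) 0/1 array of weight bits to program.
    stuck_mask:  boolean array, True where a cell is stuck.
    stuck_vals:  0/1 array of the values stuck cells are frozen at.
    Returns the effectively stored bits and per-column inversion flags.
    """
    programmed = weight_bits.copy()
    inverted = np.zeros(weight_bits.shape[1], dtype=bool)
    for c in range(weight_bits.shape[1]):
        col = weight_bits[:, c]
        mask, frozen = stuck_mask[:, c], stuck_vals[:, c]
        direct_err = np.sum(mask & (col != frozen))      # errors if stored as-is
        inv_err = np.sum(mask & ((1 - col) != frozen))   # errors if stored inverted
        if inv_err < direct_err:
            programmed[:, c] = 1 - col
            inverted[c] = True
    # Stuck cells hold their frozen values regardless of programming.
    return np.where(stuck_mask, stuck_vals, programmed), inverted
```

Under this reading, greater weight sparsity (more zero bits, encouraged by L1 regularization during training) makes columns more uniform, so one of the two storage polarities is more likely to agree with the stuck cells, which is consistent with the paper's claim that sparsity strengthens the bit-inversion scheme.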