Compression or Corruption? A Study on the Effects of Transient Faults on BNN Inference Accelerators

N. Khoshavi, Connor Broyles, Yu Bi
{"title":"Compression or Corruption? A Study on the Effects of Transient Faults on BNN Inference Accelerators","authors":"N. Khoshavi, Connor Broyles, Yu Bi","doi":"10.1109/ISQED48828.2020.9137006","DOIUrl":null,"url":null,"abstract":"Over past years, the philosophy for designing the artificial intelligence algorithms has significantly shifted towards automatically extracting the composable systems from massive data volumes. This paradigm shift has been expedited by the big data booming which enables us to easily access and analyze the highly large data sets. The most well-known class of big data analysis techniques is called deep learning. These models require significant computation power and extremely high memory accesses which necessitate the design of novel approaches to reduce the memory access and improve power efficiency while taking into account the development of domain-specific hardware accelerators to support the current and future data sizes and model structures. The current trends for designing application-specific integrated circuits barely consider the essential requirement for maintaining the complex neural network computation to be resilient in the presence of soft errors. The soft errors might strike either memory storage or combinational logic in the hardware accelerator that can affect the architectural behavior such that the precision of the results fall behind the minimum allowable correctness. In this study, we demonstrate that the impact of soft errors on a customized deep learning algorithm called Binarized Neural Network might cause drastic image misclassification. Our experimental results show that the accuracy of image classifier can drastically drop by 76.70% and 19.25% in IfcW1A1 and cnvW1A1 networks, respectively across CIFAR-10 and MNIST datasets during the fault injection for the worst-case scenarios.","PeriodicalId":225828,"journal":{"name":"2020 21st International Symposium on Quality Electronic Design (ISQED)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 21st International Symposium on Quality Electronic Design (ISQED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISQED48828.2020.9137006","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

In recent years, the philosophy of designing artificial intelligence algorithms has shifted significantly toward automatically extracting composable systems from massive data volumes. This paradigm shift has been accelerated by the big data boom, which makes very large data sets easy to access and analyze. The best-known class of big data analysis techniques is deep learning. These models demand substantial computation power and extremely high memory bandwidth, which necessitates novel approaches that reduce memory accesses and improve power efficiency, alongside domain-specific hardware accelerators that can support current and future data sizes and model structures. Current trends in application-specific integrated circuit design, however, barely consider the essential requirement that complex neural network computation remain resilient in the presence of soft errors. A soft error may strike either the memory storage or the combinational logic of a hardware accelerator and alter its architectural behavior to the point that the precision of the results falls below the minimum allowable correctness. In this study, we demonstrate that soft errors striking a customized deep learning algorithm, the Binarized Neural Network (BNN), can cause drastic image misclassification. Our experimental results show that classification accuracy can drop drastically, by 76.70% and 19.25% for the lfcW1A1 and cnvW1A1 networks, respectively, across the CIFAR-10 and MNIST datasets under worst-case fault-injection scenarios.
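The fault model described in the abstract (transient bit flips striking an accelerator's weight memory) can be illustrated with a small simulation. The sketch below is not the authors' fault-injection framework; it is a minimal numpy illustration, assuming a single-bit-upset model in which a soft error flips a stored binarized weight from +1 to -1 or vice versa. All names (binarize, inject_bit_flips, bnn_layer, ber) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def binarize(w):
        # Map real-valued weights to {-1, +1}, as in a W1A1 BNN layer.
        return np.where(w >= 0, 1, -1).astype(np.int8)

    def inject_bit_flips(w_bin, ber):
        # Assumed fault model: a transient soft error flips the sign of a
        # stored binarized weight; each bit is upset with probability `ber`.
        mask = rng.random(w_bin.shape) < ber
        faulty = w_bin.copy()
        faulty[mask] *= -1
        return faulty

    def bnn_layer(x_bin, w_bin):
        # Binarized matrix product: with inputs and weights in {-1, +1},
        # an ordinary dot product is equivalent to XNOR-popcount hardware.
        acc = x_bin.astype(np.int32) @ w_bin.astype(np.int32)
        return np.where(acc >= 0, 1, -1)

    # Toy experiment: fraction of one layer's outputs changed by faults.
    x = binarize(rng.standard_normal((1, 256)))
    w = binarize(rng.standard_normal((256, 64)))
    clean = bnn_layer(x, w)
    for ber in (1e-4, 1e-3, 1e-2):
        changed = [np.mean(bnn_layer(x, inject_bit_flips(w, ber)) != clean)
                   for _ in range(100)]
        print(f"BER={ber:.0e}: mean fraction of flipped outputs = {np.mean(changed):.3f}")

Note that the random-flip model here is only illustrative: the accuracy drops quoted in the abstract correspond to worst-case fault-injection scenarios, not uniformly random upsets.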