Selective Hardening of Critical Neurons in Deep Neural Networks

A. Ruospo, G. Gavarini, Ilaria Bragaglia, Marcello Traiola, A. Bosio, Ernesto Sánchez
Published in: 2022 25th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)
Publication date: 2022-04-06
DOI: 10.1109/ddecs54261.2022.9770168
Citations: 7

Abstract

In the literature, it is argued that Deep Neural Networks (DNNs) possess a certain degree of inherent robustness, mainly for two reasons: their distributed, parallel architecture, and the redundancy introduced by over-provisioning. Indeed, they contain more neurons than the minimum required to perform their computations, so they can withstand errors in a bounded number of neurons and continue to function properly. However, different neurons in a DNN have divergent fault-tolerance capabilities. Neurons that contribute least to the final prediction accuracy are less sensitive to errors; conversely, the neurons that contribute most are considered critical, because errors within them can seriously compromise the correct functionality of the DNN. This paper presents a software methodology based on a Triple Modular Redundancy (TMR) technique that improves the overall reliability of a DNN by selectively protecting a reduced set of critical neurons. Our findings indicate that the robustness of DNNs can be enhanced at the cost of a larger memory footprint and a small increase in total execution time. The trade-offs and improvements are discussed using two DNN architectures, ResNet and DenseNet, trained and tested on CIFAR-10.
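The core idea of selective TMR can be illustrated with a minimal sketch. The function names, the use of a dense ReLU layer, and the choice of `critical_idx` below are illustrative assumptions, not the paper's actual implementation: only the neurons flagged as critical are executed three times, and a majority voter masks a fault in any single copy.

```python
import numpy as np

def neuron_outputs(x, W, b):
    """Compute all neuron outputs of a dense layer with ReLU activation."""
    return np.maximum(0.0, W @ x + b)

def majority_vote(a, b, c):
    """Bitwise-exact TMR voter: keep the value at least two copies agree on."""
    return np.where(a == b, a, c)

def hardened_layer(x, W, b, critical_idx):
    """Compute the layer once, then re-execute only the critical neurons
    twice more and vote, so a transient fault in one copy is masked."""
    y = neuron_outputs(x, W, b)
    # Selective part of TMR: recompute only the critical rows of W.
    y2 = np.maximum(0.0, W[critical_idx] @ x + b[critical_idx])
    y3 = np.maximum(0.0, W[critical_idx] @ x + b[critical_idx])
    y[critical_idx] = majority_vote(y[critical_idx], y2, y3)
    return y
```

The memory and runtime costs reported in the abstract correspond to the duplicated weights and the extra executions of the critical subset; protecting fewer neurons shrinks both overheads.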