Importance Aware Undervolting for Robust Neural Network Training

IF 3.9 · CAS Tier 3, Computer Science · JCR Q2, Computer Science, Hardware & Architecture
Chen Zhang;Lening Wang;Xin Fu
DOI: 10.1109/TSUSC.2025.3650602
Journal: IEEE Transactions on Sustainable Computing, vol. 11, no. 2, pp. 147-157
Publication date: 2026-03-01 (online 2026-01-05)
URL: https://ieeexplore.ieee.org/document/11328921/
Citations: 0

Abstract

Convolutional Neural Networks (CNNs) are powerful tools that have been applied extensively across many domains. However, recent work has revealed their vulnerability to adversarial example attacks: by introducing visually imperceptible noise into the input image, an adversarial example can cause a CNN classifier to make false predictions. Multiple defenses against adversarial samples have been proposed, one of which injects noise into the CNN during training. However, existing methods cannot generate noise efficiently and introduce extra time and energy overhead. In this paper, we propose an Importance-Aware undervolting training framework to improve CNN robustness. Undervolting is employed during training to generate noise at negligible overhead. Meanwhile, we observe that neuron importance and bit importance in hardware can be leveraged during undervolted CNN training for controllable and flexible noise injection, which improves robustness. We design a position-aware bit mapping method in the memory unit that allocates data bits based on significance, and an importance-aware processing element (PE) mapping in the computation unit that restricts noise. Our approach regularizes the noise injected into the CNN and serves as an efficient defense against adversarial example attacks with significant energy savings. The proposed framework is evaluated through both FPGA implementation and software simulation. Experimental results show that our importance-aware undervolting CNN training achieves 47.8% adversarial accuracy under the PGD-10 attack with 47.0% training energy savings.
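The abstract's core mechanism is that an undervolting-induced fault flips a bit, and the resulting perturbation depends on that bit's significance, which is why mapping data bits by position controls the injected noise. A minimal illustrative sketch of this effect (the Q8.8 fixed-point format and the `flip_bit` helper are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def flip_bit(values, bit, total_bits=16, frac_bits=8):
    """Flip one bit of a Q8.8 unsigned fixed-point encoding of `values`.

    Illustrative only: this models an undervolting-style single-bit
    fault and shows that the perturbation magnitude scales with the
    flipped bit's significance (2**(bit - frac_bits) in real units).
    """
    scale = 1 << frac_bits
    fixed = np.round(values * scale).astype(np.int64) & ((1 << total_bits) - 1)
    flipped = fixed ^ (1 << bit)  # single-bit fault at position `bit`
    return flipped.astype(np.float64) / scale

x = np.array([1.5, 2.25, 3.0])
low = flip_bit(x, bit=0)    # least significant bit: tiny noise
high = flip_bit(x, bit=10)  # high-order bit: large corruption

print(np.max(np.abs(low - x)))   # 1/256 = 0.00390625
print(np.max(np.abs(high - x)))  # 4.0
```

Under this toy model, steering faults toward low-significance bit positions (and toward less important neurons) keeps the injected noise small and bounded, which is the kind of controllable noise the abstract describes as a training-time regularizer.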
Source journal: IEEE Transactions on Sustainable Computing
CiteScore: 7.70
Self-citation rate: 2.60%
Annual publications: 54