Rect-ViT: Rectified attention via feature attribution can improve the adversarial robustness of Vision Transformers

Impact Factor 6.3 · CAS Region 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Xu Kang, Bin Song
{"title":"Rect-ViT: Rectified attention via feature attribution can improve the adversarial robustness of Vision Transformers","authors":"Xu Kang,&nbsp;Bin Song","doi":"10.1016/j.neunet.2025.107666","DOIUrl":null,"url":null,"abstract":"<div><div>Deep neural networks (DNNs) have suffered from input perturbations and adversarial examples (AEs) for a long time, mainly caused by the distribution difference between robust and non-robust features. Recent research shows that Vision Transformers (ViTs) are more robust than traditional convolutional neural networks (CNNs). We studied the relationship between the activation distribution and robust features in the attention mechanism in ViTs, coming up with a discrepancy in the token distribution between natural and adversarial examples during adversarial training (AT). When predicting AEs, some tokens irrelevant to the targets are still activated, giving rise to the extraction of non-robust features, which reduces the robustness of ViTs. Therefore, we propose Rect-ViT, which can rectify robust features based on class-relevant gradients. Performing the relevance back-propagation of auxiliary tokens during forward prediction can achieve rectification and alignment of token activation distributions, thereby improving the robustness of ViTs during AT. The proposed rectified attention mechanism can be adapted to a variety of mainstream ViT architectures. Along with traditional AT, Rect-ViT can also be effective in other AT modes like TRADES and MART, even for state-of-the-art AT approaches. Experimental results reveal that Rect-ViT improves average robust accuracy by 0.64% and 1.72% on CIFAR10 and Imagenette against four classic attack methods. These modest gains have significant practical implications in safety-critical applications and suggest potential effectiveness for complex visual tasks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"190 ","pages":"Article 107666"},"PeriodicalIF":6.3000,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025005465","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Deep neural networks (DNNs) have long suffered from input perturbations and adversarial examples (AEs), largely due to the distribution difference between robust and non-robust features. Recent research shows that Vision Transformers (ViTs) are more robust than traditional convolutional neural networks (CNNs). We study the relationship between activation distributions and robust features in the attention mechanism of ViTs, uncovering a discrepancy in token distributions between natural and adversarial examples during adversarial training (AT). When predicting AEs, some tokens irrelevant to the targets are still activated, giving rise to the extraction of non-robust features and reducing the robustness of ViTs. We therefore propose Rect-ViT, which rectifies robust features based on class-relevant gradients. Performing relevance back-propagation through auxiliary tokens during forward prediction rectifies and aligns token activation distributions, thereby improving the robustness of ViTs during AT. The proposed rectified attention mechanism can be adapted to a variety of mainstream ViT architectures. Beyond traditional AT, Rect-ViT is also effective under other AT modes such as TRADES and MART, and even with state-of-the-art AT approaches. Experimental results reveal that Rect-ViT improves average robust accuracy by 0.64% on CIFAR10 and by 1.72% on Imagenette against four classic attack methods. These modest gains have significant practical implications in safety-critical applications and suggest potential effectiveness for complex visual tasks.
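The abstract does not give the exact rectification rule, but its description (class-relevant gradients propagated back through auxiliary tokens to re-align token activation distributions) is close in spirit to gradient-weighted attention attribution. Below is a minimal PyTorch sketch of that idea; the function names (`token_relevance`, `rectified_attention`), the sigmoid soft mask, and the threshold `tau` are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch of attention rectification via class-relevant gradients.
# All names and design choices here are illustrative assumptions,
# not the Rect-ViT implementation.

import torch

def token_relevance(attn, grad):
    """Gradient-weighted attention relevance for one layer.

    attn: (B, H, N, N) attention probabilities saved on the forward pass
    grad: (B, H, N, N) gradient of the class logit w.r.t. `attn`
    Returns per-token relevance of shape (B, N).
    """
    # Keep only positively contributing attention, average over heads,
    # then read off how much each key token feeds the [CLS] query (row 0).
    cam = (grad * attn).clamp(min=0).mean(dim=1)   # (B, N, N)
    return cam[:, 0, :]                            # (B, N)

def rectified_attention(attn, relevance, tau=0.1):
    """Suppress attention to tokens whose class relevance is low.

    attn:      (B, H, N, N) raw attention probabilities
    relevance: (B, N) per-token relevance, e.g. from `token_relevance`
    tau:       soft threshold; tokens below it are down-weighted
    """
    # Normalize relevance to [0, 1] per sample and build a soft key mask.
    r = relevance / (relevance.amax(dim=-1, keepdim=True) + 1e-8)
    mask = torch.sigmoid((r - tau) / 0.05)          # ~0 for irrelevant tokens
    # Re-weight every query's attention over the keys, then re-normalize rows.
    rect = attn * mask[:, None, None, :]
    return rect / (rect.sum(dim=-1, keepdim=True) + 1e-8)
```

In an adversarial-training loop, `grad` would come from a backward pass on the class logit taken at the auxiliary ([CLS]) token, and the rectified maps would then replace the raw attention for the remaining forward computation. This is one plausible realization under the stated assumptions, not a reproduction of Rect-ViT.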
Source Journal
Neural Networks (Engineering Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.