Improving the Transferability of Adversarial Examples by Feature Augmentation

IF 8.9 | CAS Region 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Donghua Wang;Wen Yao;Tingsong Jiang;Xiaohu Zheng;Junqi Wu
{"title":"利用特征增强提高对抗性示例的可转移性","authors":"Donghua Wang;Wen Yao;Tingsong Jiang;Xiaohu Zheng;Junqi Wu","doi":"10.1109/TNNLS.2025.3563855","DOIUrl":null,"url":null,"abstract":"Adversarial transferability is a significant property of adversarial examples, which renders the adversarial example capable of attacking unknown models. However, the models with different architectures on the same task would concentrate on different information, which weakens adversarial transferability. To enhance the adversarial transferability, input transformation-based attacks perform random transformation over input to find a better result that can resist such transformations, but these methods ignore the model discrepancy; ensemble attacks fuse multiple models to shrink the search space to ensure that the found adversarial examples work on these models, but ensemble attacks are resource-intensive. In this article, we propose a simple but effective feature augmentation attack (FAUG) method to improve adversarial transferability. We dynamically add random noise to intermediate features of the target model during the generation of adversarial examples, thereby avoiding overfitting the target model. Specifically, we first explore the noise tolerance of the model and disclose the discrepancy under different layers and noise strengths. Then, based on that analysis, we devise a dynamic random noise generation method, which determines noise strength according to the produced features in the mini-batch. Finally, we exploit the gradient-based attack algorithm on featureaugmented models, resulting in better adversarial transferability without introducing extra computation costs. Extensive experiments conducted on the ImageNet dataset across CNN and Transformer models corroborate the efficacy of our method, e.g., we achieve improvement of +30.67% and +5.57% on input transformation-based attacks and combination methods, respectively.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 9","pages":"17212-17226"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improving the Transferability of Adversarial Examples by Feature Augmentation\",\"authors\":\"Donghua Wang;Wen Yao;Tingsong Jiang;Xiaohu Zheng;Junqi Wu\",\"doi\":\"10.1109/TNNLS.2025.3563855\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial transferability is a significant property of adversarial examples, which renders the adversarial example capable of attacking unknown models. However, the models with different architectures on the same task would concentrate on different information, which weakens adversarial transferability. To enhance the adversarial transferability, input transformation-based attacks perform random transformation over input to find a better result that can resist such transformations, but these methods ignore the model discrepancy; ensemble attacks fuse multiple models to shrink the search space to ensure that the found adversarial examples work on these models, but ensemble attacks are resource-intensive. In this article, we propose a simple but effective feature augmentation attack (FAUG) method to improve adversarial transferability. We dynamically add random noise to intermediate features of the target model during the generation of adversarial examples, thereby avoiding overfitting the target model. 
Specifically, we first explore the noise tolerance of the model and disclose the discrepancy under different layers and noise strengths. Then, based on that analysis, we devise a dynamic random noise generation method, which determines noise strength according to the produced features in the mini-batch. Finally, we exploit the gradient-based attack algorithm on featureaugmented models, resulting in better adversarial transferability without introducing extra computation costs. Extensive experiments conducted on the ImageNet dataset across CNN and Transformer models corroborate the efficacy of our method, e.g., we achieve improvement of +30.67% and +5.57% on input transformation-based attacks and combination methods, respectively.\",\"PeriodicalId\":13303,\"journal\":{\"name\":\"IEEE transactions on neural networks and learning systems\",\"volume\":\"36 9\",\"pages\":\"17212-17226\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-03-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on neural networks and learning systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10993300/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10993300/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Adversarial transferability is a significant property of adversarial examples: it renders an adversarial example capable of attacking unknown models. However, models with different architectures trained on the same task concentrate on different information, which weakens adversarial transferability. To enhance transferability, input transformation-based attacks apply random transformations to the input to find adversarial examples that resist such transformations, but these methods ignore the discrepancy between models; ensemble attacks fuse multiple models to shrink the search space and ensure that the found adversarial examples work on all of them, but they are resource-intensive. In this article, we propose a simple but effective feature augmentation attack (FAUG) method to improve adversarial transferability. We dynamically add random noise to intermediate features of the target model during the generation of adversarial examples, thereby avoiding overfitting to the target model. Specifically, we first explore the noise tolerance of the model and disclose how it differs across layers and noise strengths. Then, based on that analysis, we devise a dynamic random noise generation method that determines the noise strength according to the features produced in the mini-batch. Finally, we run a gradient-based attack algorithm on the feature-augmented model, obtaining better adversarial transferability without introducing extra computation cost. Extensive experiments conducted on the ImageNet dataset across CNN and Transformer models corroborate the efficacy of our method; e.g., we achieve improvements of +30.67% and +5.57% over input transformation-based attacks and combination methods, respectively.
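
To make the mechanism concrete, below is a minimal PyTorch sketch of the feature-augmentation idea as the abstract describes it, not the authors' released implementation: a forward hook injects Gaussian noise into one intermediate layer, with the noise strength tied to a statistic of the features produced for the current mini-batch, and an iterative FGSM attack is then run against the augmented model. The hooked layer (layer2), the 0.1 scaling factor, and the attack hyperparameters are all illustrative assumptions.

```python
# Hedged sketch of the feature-augmentation attack idea -- not the authors' code.
import torch
import torchvision.models as models

# Pretrained white-box model (input normalization omitted for brevity).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def feature_noise_hook(module, inputs, output):
    # Dynamic noise: scale Gaussian noise by a statistic of the current
    # mini-batch features (an assumed rule standing in for the paper's
    # dynamic noise-generation method).
    sigma = 0.1 * output.detach().std()
    return output + sigma * torch.randn_like(output)

# Augment one intermediate layer; the paper first analyzes noise
# tolerance across layers and strengths to guide this choice.
hook = model.layer2.register_forward_hook(feature_noise_hook)

def ifgsm_on_augmented_model(x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterative FGSM against the feature-augmented model."""
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)           # hook re-noises features
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()        # keep a valid image
    return x_adv

# Usage: x_adv = ifgsm_on_augmented_model(images, labels); hook.remove()
```

Because the noise is redrawn on every forward pass, each attack iteration sees a slightly different feature response, which discourages the perturbation from overfitting the single white-box model; and since the hook adds no extra forward passes, this is consistent with the abstract's claim of no extra computation cost.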
Source Journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Publication volume: 2102
Review time: 3-8 weeks
Journal introduction: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.