Exploring black-box adversarial attacks on Interpretable Deep Learning Systems

IF 3.5 | CAS Tier 3, Computer Science | JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yike Zhan, Baolin Zheng, Dongxin Liu, Boren Deng, Xu Yang
{"title":"探索可解释深度学习系统的黑盒对抗性攻击","authors":"Yike Zhan ,&nbsp;Baolin Zheng ,&nbsp;Dongxin Liu ,&nbsp;Boren Deng ,&nbsp;Xu Yang","doi":"10.1016/j.cviu.2025.104423","DOIUrl":null,"url":null,"abstract":"<div><div>Recent studies have empirically demonstrated that neural network interpretability is susceptible to malicious manipulations. However, existing attacks on Interpretable Deep Learning Systems (IDLSes) predominantly focus on the white-box setting, which is impractical for real-world applications. In this paper, we present the first attempt to attack IDLSes in more challenging and realistic black-box settings. We introduce a novel framework called Dual Black-box Adversarial Attack (DBAA) which can generate adversarial examples that are misclassified as the target class, while maintaining interpretations similar to their benign counterparts. In our method, adversarial examples are generated via black-box adversarial attacks and then refined using ADV-Plugin, a novel approach proposed in this paper, which employs single-pixel perturbation and an adaptive step-size algorithm to enhance explanation similarity with benign samples while preserving adversarial properties. We conduct extensive experiments on multiple datasets (CIFAR-10, ImageNet, and Caltech-101) and various combinations of classifiers and interpreters, comparing our approach against five baseline methods. Empirical results indicate that DBAA is comparable to regular adversarial attacks in compromising classifiers and significantly enhances interpretability deception. Specifically, DBAA achieves Intersection over Union (IoU) scores exceeding 0.5 across all interpreters, approximately doubling the performance of regular attacks, while concurrently reducing the average <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span> distance between its attribution maps and those of benign samples by about 50%.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"259 ","pages":"Article 104423"},"PeriodicalIF":3.5000,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Exploring black-box adversarial attacks on Interpretable Deep Learning Systems\",\"authors\":\"Yike Zhan ,&nbsp;Baolin Zheng ,&nbsp;Dongxin Liu ,&nbsp;Boren Deng ,&nbsp;Xu Yang\",\"doi\":\"10.1016/j.cviu.2025.104423\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Recent studies have empirically demonstrated that neural network interpretability is susceptible to malicious manipulations. However, existing attacks on Interpretable Deep Learning Systems (IDLSes) predominantly focus on the white-box setting, which is impractical for real-world applications. In this paper, we present the first attempt to attack IDLSes in more challenging and realistic black-box settings. We introduce a novel framework called Dual Black-box Adversarial Attack (DBAA) which can generate adversarial examples that are misclassified as the target class, while maintaining interpretations similar to their benign counterparts. In our method, adversarial examples are generated via black-box adversarial attacks and then refined using ADV-Plugin, a novel approach proposed in this paper, which employs single-pixel perturbation and an adaptive step-size algorithm to enhance explanation similarity with benign samples while preserving adversarial properties. 
We conduct extensive experiments on multiple datasets (CIFAR-10, ImageNet, and Caltech-101) and various combinations of classifiers and interpreters, comparing our approach against five baseline methods. Empirical results indicate that DBAA is comparable to regular adversarial attacks in compromising classifiers and significantly enhances interpretability deception. Specifically, DBAA achieves Intersection over Union (IoU) scores exceeding 0.5 across all interpreters, approximately doubling the performance of regular attacks, while concurrently reducing the average <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span> distance between its attribution maps and those of benign samples by about 50%.</div></div>\",\"PeriodicalId\":50633,\"journal\":{\"name\":\"Computer Vision and Image Understanding\",\"volume\":\"259 \",\"pages\":\"Article 104423\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-06-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Vision and Image Understanding\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1077314225001468\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314225001468","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Recent studies have empirically demonstrated that neural network interpretability is susceptible to malicious manipulations. However, existing attacks on Interpretable Deep Learning Systems (IDLSes) predominantly focus on the white-box setting, which is impractical for real-world applications. In this paper, we present the first attempt to attack IDLSes in more challenging and realistic black-box settings. We introduce a novel framework called Dual Black-box Adversarial Attack (DBAA), which can generate adversarial examples that are misclassified as the target class while maintaining interpretations similar to their benign counterparts. In our method, adversarial examples are generated via black-box adversarial attacks and then refined using ADV-Plugin, a novel approach proposed in this paper, which employs single-pixel perturbation and an adaptive step-size algorithm to enhance explanation similarity with benign samples while preserving adversarial properties. We conduct extensive experiments on multiple datasets (CIFAR-10, ImageNet, and Caltech-101) and various combinations of classifiers and interpreters, comparing our approach against five baseline methods. Empirical results indicate that DBAA is comparable to regular adversarial attacks in compromising classifiers and significantly enhances interpretability deception. Specifically, DBAA achieves Intersection over Union (IoU) scores exceeding 0.5 across all interpreters, approximately doubling the performance of regular attacks, while concurrently reducing the average ℓ2 distance between its attribution maps and those of benign samples by about 50%.
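To make the refinement step concrete, below is a minimal Python sketch of an ADV-Plugin-style loop reconstructed from the abstract alone: it applies single-pixel perturbations with an adaptive step size, keeping a candidate only if the target misclassification still holds and the attribution map moves closer (in ℓ2) to the benign one. The query interfaces `predict_proba` and `attribution_map`, the acceptance rule, the step-size schedule, and the top-10% thresholding in the IoU helper are all assumptions, not the paper's actual procedure.

```python
# Minimal sketch of an ADV-Plugin-style refinement loop (assumptions noted above).
import numpy as np

def refine(x_adv, x_benign, target, predict_proba, attribution_map,
           iters=500, step=8.0, shrink=0.5, min_step=0.5):
    """Nudge single pixels of x_adv toward x_benign so that its attribution map
    approaches the benign one while the target misclassification is preserved."""
    ref_map = attribution_map(x_benign)            # benign explanation to match
    best = x_adv.astype(np.float64).copy()
    best_dist = np.linalg.norm(attribution_map(best) - ref_map)
    h, w = best.shape[:2]
    for _ in range(iters):
        cand = best.copy()
        i, j = np.random.randint(h), np.random.randint(w)
        # Single-pixel perturbation: move one pixel toward the benign image.
        cand[i, j] += step * np.sign(x_benign[i, j] - cand[i, j])
        cand = np.clip(cand, 0.0, 255.0)
        if predict_proba(cand).argmax() != target:
            # Attack broke: reject the candidate and back off the step size.
            step = max(step * shrink, min_step)
            continue
        dist = np.linalg.norm(attribution_map(cand) - ref_map)
        if dist < best_dist:                       # explanation got closer: keep it
            best, best_dist = cand, dist
    return best

def iou(map_a, map_b, top_frac=0.1):
    """IoU of the top-`top_frac` most-attributed pixels of two maps, the kind of
    overlap score the abstract reports (the 10% threshold is an assumption)."""
    k = int(top_frac * map_a.size)
    top_a = np.argsort(map_a, axis=None)[-k:]
    top_b = np.argsort(map_b, axis=None)[-k:]
    inter = np.intersect1d(top_a, top_b).size
    return inter / (2 * k - inter)                 # |A ∪ B| = 2k - |A ∩ B|
```

In a black-box setting both helpers would be backed by queries to the deployed classifier and its interpreter; the loop consumes only their outputs, never gradients or model internals.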
Source journal
Computer Vision and Image Understanding (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 7.80
Self-citation rate: 4.40%
Annual publications: 112
Review time: 79 days
Journal introduction: The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.

Research areas include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems