Transferable targeted adversarial attack via multi-source perturbation generation and integration

IF 3.1 · JCR Q2 (Computer Science, Information Systems) · CAS Tier 4 (Computer Science)
Shihui Zhang, Shaojie Han, Sheng Yang, Xueqiang Han, Junbin Su, Gangzheng Zhai, Houlin Wang
DOI: 10.1016/j.jvcir.2025.104493
Journal of Visual Communication and Image Representation, Volume 111, Article 104493. Published 2025-06-03.
Citations: 0

Abstract

With the rapid development of artificial intelligence, deep learning models have been widely deployed across society (e.g., for video and image representation). However, the existence of adversarial examples makes these models conspicuously fragile, which has become a major obstacle to their safe deployment. Studying how adversarial examples are generated, and how to make them highly transferable, is therefore of great importance. In this paper, we propose a transferable targeted adversarial attack method, Multi-source Perturbation Generation and Integration (MPGI), to probe the vulnerability and uncertainty of deep learning models. MPGI comprises three key designs that together achieve targeted transferability of adversarial examples. First, a Collaborative Feature Fusion (CFF) component reduces the influence of the original example's features on the model's classification by introducing collaboration into feature fusion. Second, a Multi-scale Perturbation Dynamic Fusion (MPDF) module fuses perturbations from different scales to enrich perturbation diversity. Finally, a novel Logit Margin with Penalty (LMP) loss further strengthens the examples' ability to mislead the model; as a pluggable component, LMP can also be adopted by other approaches to boost performance. In summary, MPGI effectively mounts targeted attacks, exposes shortcomings of existing models, and advances the security of artificial intelligence. Extensive experiments on the ImageNet-Compatible and CIFAR-10 datasets demonstrate the superiority of the proposed method: for instance, the attack success rate increases by 17.6% and 17.0% over the state-of-the-art method when transferring from DN-121 to the Inc-v3 and MB-v2 models, respectively.
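To make the ingredients concrete, the sketch below shows a *generic* logit-margin targeted loss and a single targeted L∞ perturbation step. This is not the paper's LMP loss or the MPGI pipeline (the abstract does not specify the penalty term or the fusion details); it is a minimal NumPy illustration, under common conventions from the transferable-targeted-attack literature, of what "raising the target logit above its best competitor" and "bounded targeted perturbation" mean. The function names are our own.

```python
import numpy as np

def logit_margin_loss(logits, target):
    """Generic targeted logit-margin loss (not the paper's LMP).

    Returns max_{i != target} z_i - z_target, so minimizing it pushes
    the target-class logit above the strongest competing logit; the
    loss is negative once the target class already dominates."""
    z_t = logits[target]
    competitors = np.delete(logits, target)  # all non-target logits
    return competitors.max() - z_t

def targeted_linf_step(x, grad, eps):
    """One FGSM-style targeted step: descend the loss gradient and
    keep the perturbation inside an L_inf ball of radius eps."""
    x_adv = x - eps * np.sign(grad)          # move toward the target class
    return np.clip(x_adv, x - eps, x + eps)  # enforce the L_inf budget
```

In an iterative attack, `grad` would come from backpropagating `logit_margin_loss` through the surrogate model (e.g., DN-121) at each step; transferability is then evaluated by feeding the final `x_adv` to unseen models such as Inc-v3 or MB-v2.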
Source journal

Journal of Visual Communication and Image Representation (Engineering/Technology – Computer Science: Software Engineering)

CiteScore: 5.40
Self-citation rate: 11.50%
Articles per year: 188
Review time: 9.9 months
Aims and scope: The Journal of Visual Communication and Image Representation publishes papers on state-of-the-art visual communication and image representation, with emphasis on novel technologies and theoretical work in this multidisciplinary area of pure and applied research. The field of visual communication and image representation is considered in its broadest sense and covers both digital and analog aspects as well as processing and communication in biological visual systems.