Enhancing adversarial transferability via transformation inference

IF 6.3 · CAS Tier 1 · JCR Q1 (Computer Science, Artificial Intelligence)
Jiaxin Hu, Jie Lin, Xiangyuan Yang, Hanlin Zhang, Peng Zhao
{"title":"通过转换推理增强对抗可转移性","authors":"Jiaxin Hu ,&nbsp;Jie Lin ,&nbsp;Xiangyuan Yang ,&nbsp;Hanlin Zhang ,&nbsp;Peng Zhao","doi":"10.1016/j.neunet.2025.107896","DOIUrl":null,"url":null,"abstract":"<div><div>The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature, neglecting the wide spectrum of potential transformations. This may limit the transferability of adversarial examples. To address this issue, we propose a novel transformation variational inference attack(TVIA) to improve the diversity of transformations, which leverages variational inference (VI) to explore a broader set of input transformations, thus enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) model to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE’s sampling process, enabling the generation of more diverse adversarial examples. To stabilize the gradient direction during the attack process, we fuse transformed images with the original image and apply random noise. The experiment results on Cifar10, Cifar100, ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass all existing attack methods. Specially, the ASR reaches 95.80 % when transferred from Inc-v3 to Inc-v4, demonstrating that our TVIA can effectively enhance the transferability of adversarial examples.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"192 ","pages":"Article 107896"},"PeriodicalIF":6.3000,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing adversarial transferability via transformation inference\",\"authors\":\"Jiaxin Hu ,&nbsp;Jie Lin ,&nbsp;Xiangyuan Yang ,&nbsp;Hanlin Zhang ,&nbsp;Peng Zhao\",\"doi\":\"10.1016/j.neunet.2025.107896\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature, neglecting the wide spectrum of potential transformations. This may limit the transferability of adversarial examples. To address this issue, we propose a novel transformation variational inference attack(TVIA) to improve the diversity of transformations, which leverages variational inference (VI) to explore a broader set of input transformations, thus enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) model to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE’s sampling process, enabling the generation of more diverse adversarial examples. 
To stabilize the gradient direction during the attack process, we fuse transformed images with the original image and apply random noise. The experiment results on Cifar10, Cifar100, ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass all existing attack methods. Specially, the ASR reaches 95.80 % when transferred from Inc-v3 to Inc-v4, demonstrating that our TVIA can effectively enhance the transferability of adversarial examples.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"192 \",\"pages\":\"Article 107896\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025007774\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025007774","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature, neglecting the wide spectrum of potential transformations; this can limit the transferability of the resulting adversarial examples. To address this issue, we propose a novel transformation variational inference attack (TVIA) that leverages variational inference (VI) to explore a broader set of input transformations, enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE's sampling process, enabling the generation of more diverse adversarial examples. To stabilize the gradient direction during the attack, we fuse transformed images with the original image and apply random noise. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass those of all existing attack methods. Specifically, the ASR reaches 95.80% when transferring from Inc-v3 to Inc-v4, demonstrating that TVIA effectively enhances the transferability of adversarial examples.
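To make the transformation step concrete, the following is a minimal PyTorch sketch of the procedure the abstract describes: encode the input with a VAE, draw latent samples with a widened variance (standing in for the paper's modified sampling process), decode them, fuse each decoded image with the original, add random noise, and average the resulting surrogate-model gradients. All names (vae, surrogate, scale, fuse, eta) are hypothetical; this illustrates the general idea under stated assumptions and is not the authors' implementation.

```python
# Sketch of a VAE-based transformation step for a transfer attack.
# Assumes hypothetical objects: vae.encode(x) -> (mu, logvar), vae.decode(z) -> image
# of the same shape as x, and a differentiable surrogate classifier.
import torch

def transformed_gradient(x, y, vae, surrogate, loss_fn,
                         n_samples=5, scale=1.5, fuse=0.7, eta=0.05):
    """Average the loss gradient over VAE-transformed copies of x."""
    grad = torch.zeros_like(x)
    with torch.no_grad():
        mu, logvar = vae.encode(x)                 # latent posterior parameters
        std = torch.exp(0.5 * logvar) * scale      # widened std -> more diverse samples
    for _ in range(n_samples):
        with torch.no_grad():
            z = mu + std * torch.randn_like(std)   # reparameterized latent sample
            x_t = vae.decode(z)                    # transformed image
        x_in = fuse * x + (1.0 - fuse) * x_t       # fuse with the original image
        x_in = x_in + eta * torch.randn_like(x)    # random noise stabilizes direction
        # Gradient is taken at the fused input, a common approximation in
        # input-transformation attacks.
        x_in = x_in.clamp(0, 1).detach().requires_grad_(True)
        loss = loss_fn(surrogate(x_in), y)
        grad += torch.autograd.grad(loss, x_in)[0]
    return grad / n_samples
```

The averaged gradient could then drive a standard iterative sign-based update (e.g., an MI-FGSM-style loop) to craft the adversarial example.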
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published: 425
Review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.