Jiaxin Hu, Jie Lin, Xiangyuan Yang, Hanlin Zhang, Peng Zhao
{"title":"通过转换推理增强对抗可转移性","authors":"Jiaxin Hu , Jie Lin , Xiangyuan Yang , Hanlin Zhang , Peng Zhao","doi":"10.1016/j.neunet.2025.107896","DOIUrl":null,"url":null,"abstract":"<div><div>The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature, neglecting the wide spectrum of potential transformations. This may limit the transferability of adversarial examples. To address this issue, we propose a novel transformation variational inference attack(TVIA) to improve the diversity of transformations, which leverages variational inference (VI) to explore a broader set of input transformations, thus enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) model to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE’s sampling process, enabling the generation of more diverse adversarial examples. To stabilize the gradient direction during the attack process, we fuse transformed images with the original image and apply random noise. The experiment results on Cifar10, Cifar100, ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass all existing attack methods. Specially, the ASR reaches 95.80 % when transferred from Inc-v3 to Inc-v4, demonstrating that our TVIA can effectively enhance the transferability of adversarial examples.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"192 ","pages":"Article 107896"},"PeriodicalIF":6.3000,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing adversarial transferability via transformation inference\",\"authors\":\"Jiaxin Hu , Jie Lin , Xiangyuan Yang , Hanlin Zhang , Peng Zhao\",\"doi\":\"10.1016/j.neunet.2025.107896\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature, neglecting the wide spectrum of potential transformations. This may limit the transferability of adversarial examples. To address this issue, we propose a novel transformation variational inference attack(TVIA) to improve the diversity of transformations, which leverages variational inference (VI) to explore a broader set of input transformations, thus enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) model to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE’s sampling process, enabling the generation of more diverse adversarial examples. To stabilize the gradient direction during the attack process, we fuse transformed images with the original image and apply random noise. 
The experiment results on Cifar10, Cifar100, ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass all existing attack methods. Specially, the ASR reaches 95.80 % when transferred from Inc-v3 to Inc-v4, demonstrating that our TVIA can effectively enhance the transferability of adversarial examples.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"192 \",\"pages\":\"Article 107896\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025007774\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025007774","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Enhancing adversarial transferability via transformation inference
The transferability of adversarial examples has become a crucial issue in black-box attacks. Input transformation techniques have shown considerable promise in enhancing transferability, but existing methods are often limited by their empirical nature and neglect the wide spectrum of potential transformations, which may limit the transferability of adversarial examples. To address this issue, we propose a novel transformation variational inference attack (TVIA) that improves the diversity of transformations by leveraging variational inference (VI) to explore a broader set of input transformations, thus enriching the diversity of adversarial examples and enhancing their transferability across models. Unlike traditional empirical approaches, our method employs the variational inference of a Variational Autoencoder (VAE) to explore potential transformations in the latent space, significantly expanding the range of image variations. We further enhance diversity by modifying the VAE's sampling process, enabling the generation of more diverse adversarial examples. To stabilize the gradient direction during the attack, we fuse transformed images with the original image and apply random noise. Experimental results on the CIFAR-10, CIFAR-100, and ImageNet datasets show that the average attack success rates (ASRs) of the adversarial examples generated by our TVIA surpass those of all existing attack methods. Specifically, the ASR reaches 95.80% when transferring from Inc-v3 to Inc-v4, demonstrating that TVIA can effectively enhance the transferability of adversarial examples.
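The abstract only outlines the approach, so the following PyTorch sketch is an illustration of the general idea rather than the authors' implementation: the callables `vae_encode`/`vae_decode`, all parameter names and default values, and the MI-FGSM-style momentum update are assumptions introduced here for clarity.

```python
# Minimal sketch of a TVIA-style transfer attack (not the paper's code).
# Assumptions: `vae_encode(x)` returns (mu, logvar) of a pretrained VAE,
# `vae_decode(z)` reconstructs an image from a latent code, `model` is a
# surrogate classifier, and inputs are 4-D image batches in [0, 1].
import torch
import torch.nn.functional as F

def tvia_style_attack(x, y, model, vae_encode, vae_decode,
                      eps=8 / 255, steps=10, n_samples=5,
                      latent_scale=1.5, fuse_ratio=0.7, noise_std=0.05,
                      momentum_decay=1.0):
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                  # momentum accumulator

    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad_sum = torch.zeros_like(x)

        for _ in range(n_samples):
            # Explore transformations in the latent space: sample with an
            # inflated variance (latent_scale) to diversify reconstructions.
            mu, logvar = vae_encode(x_adv)
            z = mu + latent_scale * torch.randn_like(mu) * torch.exp(0.5 * logvar)
            x_trans = vae_decode(z)

            # Fuse the transformed image with the current adversarial image
            # and add random noise to stabilize the gradient direction.
            x_fused = fuse_ratio * x_adv + (1 - fuse_ratio) * x_trans
            x_fused = x_fused + noise_std * torch.randn_like(x_fused)

            loss = F.cross_entropy(model(x_fused), y)
            grad_sum += torch.autograd.grad(loss, x_adv)[0]

        # Average gradients over the transformed copies, then apply an
        # MI-FGSM-style normalized momentum update (illustrative choice).
        grad = grad_sum / n_samples
        g = momentum_decay * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)

        # Sign step, projected back into the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)

    return x_adv
```

Averaging gradients over several latent-space transformations, rather than attacking a single fixed view of the input, is what the abstract credits for the improved diversity and, in turn, the better cross-model transferability.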
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.