{"title":"Frequency-domain augmentation and multi-scale feature alignment for improving transferability of adversarial examples","authors":"Gui-Hong Li, Heng-Ru Zhang, Fan Min","doi":"10.1016/j.comnet.2025.111261","DOIUrl":null,"url":null,"abstract":"<div><div>Transfer-based adversarial attack implies that the same adversarial example can fool Deep Neural Networks (DNNs) with different architectures. Model-related approaches train a new surrogate model in local to generate adversarial examples. However, because DNNs with different architectures focus on diverse features within the same data, adversarial examples generated by surrogate models frequently exhibit poor transferability when the surrogate and target models have significant architectural differences. In this paper, we propose a Two-Stage Generation Framework (TSGF) through frequency-domain augmentation and multi-scale feature alignment to address this issue. In the stage of surrogate model training, we enable the surrogate model to capture various features of data through detail and diversity enhancement. Detail enhancement increases the weight of details in clean examples by a frequency-domain augmentation module. Diversity enhancement incorporates slight adversarial examples into the training process to increase the diversity of clean examples. In the stage of adversarial generation, we perturb the distinctive features that different models focus on to improve transferability by a multi-scale feature alignment attack technique. Specifically, we design a loss function using the intermediate multi-layer features of the surrogate model to maximize the difference between the features of clean and adversarial examples. We compare TSGF with a combination of three closely related surrogate model training schemes and the most relevant adversarial attack methods. Results show that TSGF improves transferability across significantly different architectures. The implementation of TSGF is available at <span><span>https://github.com/zhanghrswpu/TSGF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"264 ","pages":"Article 111261"},"PeriodicalIF":4.4000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625002294","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Transfer-based adversarial attacks rely on the property that the same adversarial example can fool Deep Neural Networks (DNNs) with different architectures. Model-related approaches train a new surrogate model locally to generate adversarial examples. However, because DNNs with different architectures focus on diverse features within the same data, adversarial examples generated by surrogate models frequently exhibit poor transferability when the surrogate and target models differ significantly in architecture. In this paper, we propose a Two-Stage Generation Framework (TSGF) based on frequency-domain augmentation and multi-scale feature alignment to address this issue. In the surrogate model training stage, we enable the surrogate model to capture various features of the data through detail and diversity enhancement. Detail enhancement increases the weight of fine details in clean examples through a frequency-domain augmentation module. Diversity enhancement incorporates slightly perturbed adversarial examples into the training process to increase the diversity of the clean examples. In the adversarial generation stage, we use a multi-scale feature alignment attack to perturb the distinctive features that different models focus on, thereby improving transferability. Specifically, we design a loss function over the intermediate multi-layer features of the surrogate model that maximizes the difference between the features of clean and adversarial examples. We compare TSGF with a combination of three closely related surrogate model training schemes and the most relevant adversarial attack methods. Results show that TSGF improves transferability across significantly different architectures. The implementation of TSGF is available at https://github.com/zhanghrswpu/TSGF.
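The abstract names two concrete technical components: a frequency-domain augmentation that increases the weight of fine details in clean examples, and an attack loss built from intermediate multi-layer features of the surrogate model. The sketch below is a minimal, hypothetical PyTorch illustration of both ideas; the high-frequency mask, gain factor, layer selection, and mean-squared distance are assumptions made for illustration and are not the exact design of TSGF (see the linked repository for the authors' implementation).

```python
# Hypothetical sketch: a frequency-domain detail boost and a multi-scale
# feature-separation loss, loosely following the two components named in the
# abstract. All names and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn as nn


def boost_high_frequencies(x: torch.Tensor, gain: float = 1.5, radius: float = 0.1) -> torch.Tensor:
    """Amplify frequency components outside a low-frequency disc, which raises
    the relative weight of fine details in the image batch `x`
    (shape N x C x H x W, values in [0, 1])."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.arange(h, device=x.device),
        torch.arange(w, device=x.device),
        indexing="ij",
    )
    dist = torch.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = torch.ones_like(dist)
    mask[dist > radius * min(h, w)] = gain  # boost everything outside the low-frequency disc
    boosted = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real
    return boosted.clamp(0.0, 1.0)


class MultiScaleFeatureLoss:
    """Collects intermediate features from chosen layers of a surrogate model via
    forward hooks and measures how far adversarial features drift from the clean
    features, summed over the selected scales."""

    def __init__(self, model: nn.Module, layer_names):
        self.model = model
        self.features = {}
        for name, module in model.named_modules():
            if name in layer_names:
                module.register_forward_hook(self._make_hook(name))

    def _make_hook(self, name):
        def hook(_module, _inputs, output):
            self.features[name] = output
        return hook

    def _extract(self, x):
        self.features = {}
        self.model(x)
        return dict(self.features)

    def __call__(self, x_clean: torch.Tensor, x_adv: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # clean features serve as a fixed reference
            clean = {k: v.detach() for k, v in self._extract(x_clean).items()}
        adv = self._extract(x_adv)
        # Sum of per-layer mean squared distances; an attack would *maximize*
        # this quantity by gradient ascent on x_adv.
        return sum(torch.mean((adv[k] - clean[k]) ** 2) for k in clean)
```

In this reading, the detail-boosted images would be used when training the surrogate model, while an iterative attack would repeatedly take the gradient of the feature-separation loss with respect to x_adv, step in its sign (FGSM/PGD style), and clip to the perturbation budget.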
Journal Description
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.