Adaptive Multi-scale Degradation-Based Attack for Boosting the Adversarial Transferability
Ran Ran; Jiwei Wei; Chaoning Zhang; Guoqing Wang; Yang Yang; Heng Tao Shen
IEEE Transactions on Multimedia, vol. 26, pp. 10979-10990, published 23 July 2024
DOI: 10.1109/TMM.2024.3428311 · https://ieeexplore.ieee.org/document/10607921/
Citations: 0
Abstract
The vulnerability of deep neural networks to adversarial examples has raised serious concerns about the security of these models. Black-box adversarial attacks have attracted considerable attention as an influential means of evaluating model robustness. Although various sophisticated attack methods have been proposed, success rates in the black-box scenario still leave room for improvement. To address this, we develop an Adaptive Multi-scale Degradation-based Attack method called AMDA. The intuitive motivation behind our approach is that different models tend to attend to similar regions of low-scale images. Specifically, AMDA uses degraded images to generate perturbations at different scales and fuses these perturbations to produce adversarial examples that are insensitive to model changes. Furthermore, we design an adaptive multi-scale perturbation fusion that evaluates the transferability of perturbations at each scale based on noise and adaptively allocates fusion weights, prioritizing highly transferable attacks and avoiding local optima. Extensive experiments on the ImageNet, CIFAR-100, and CIFAR-10 datasets demonstrate that the proposed AMDA algorithm achieves competitive performance against both normally trained models and defended models.
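The abstract describes two mechanisms: per-scale perturbations computed from degraded images, and noise-based adaptive fusion of those perturbations. The paper's implementation is not reproduced here; below is a minimal PyTorch sketch of that idea. The helper names (`degrade`, `transfer_score`, `amda_step`), the bilinear down/up-sampling degradation, and the softmax weighting heuristic are all illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only -- NOT the authors' released implementation.
# It approximates the two ideas from the abstract: (1) gradients computed
# through degraded (down/up-sampled) copies of the image at several scales,
# and (2) fusion weights estimated from a noise-based transferability proxy.
# All helper names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn.functional as F


def degrade(x, scale):
    """Degrade an image batch by bilinear down-sampling, then restore its size."""
    h, w = x.shape[-2:]
    small = F.interpolate(x, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)


def transfer_score(model, x, y, grad, alpha=2 / 255, sigma=0.05, n=4):
    """Proxy for transferability: mean loss after a signed step along `grad`
    plus random noise. A higher loss suggests the direction survives change."""
    with torch.no_grad():
        total = 0.0
        for _ in range(n):
            probe = (x + alpha * grad.sign() + sigma * torch.randn_like(x)).clamp(0, 1)
            total += F.cross_entropy(model(probe), y).item()
    return total / n


def amda_step(model, x_adv, y, scales=(1.0, 0.75, 0.5), alpha=2 / 255):
    """One attack iteration: per-scale gradients, adaptively weighted fusion."""
    grads, scores = [], []
    for s in scales:
        x = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(degrade(x, s)), y)
        grads.append(torch.autograd.grad(loss, x)[0])
        scores.append(transfer_score(model, x_adv, y, grads[-1]))
    weights = torch.softmax(torch.tensor(scores), dim=0)  # adaptive fusion weights
    fused = sum(w * g for w, g in zip(weights, grads))
    return (x_adv + alpha * fused.sign()).clamp(0, 1).detach()
```

Iterating `amda_step` for a fixed number of steps, projecting each iterate back into an L∞ ball around the clean image, yields a basic transferable attack loop against a surrogate model; the actual degradation operators and weighting rule used by AMDA should be taken from the paper itself.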
Journal Introduction:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.