Title: Adaptive patch transformation for adversarial defense
Authors: Xin Zhang, Shijie Xiao, Han Zhang, Lixia Ji
DOI: 10.1016/j.cose.2025.104368
Journal: Computers & Security, Volume 153, Article 104368 (JCR Q1, Computer Science, Information Systems)
Publication date: 2025-02-15 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0167404825000574
Citation count: 0
Abstract
Deep learning models are vulnerable to adversarial attacks. Although various defense methods have been proposed, such as incorporating perturbations during training, removing them in preprocessing steps, or using image-to-image mapping, these methods often struggle to defend robustly against diverse adversarial attacks and may affect the model's predictions on normal samples. To address this issue, we propose an adversarial example defense method based on image transformation. First, we design an image transformation combiner that integrates multiple image transformations for defending against adversarial examples, thereby enhancing the robustness of the method. Second, we divide the image into patches and apply a different combination of image transformations to each patch, ensuring the retention of useful information and increasing the flexibility of the transformations. We combined 12 geometric or color transformations using the image transformation combiner and tested the method on adversarial examples generated from the MNIST, CIFAR-10, and ImageNet datasets. Experimental results show that our method outperforms other advanced detection methods in terms of accuracy and effectively mitigates the impact of adversarial perturbations on the model.
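The patch-wise transformation idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a small hypothetical set of six simple geometric and color transformations (the paper combines 12, which are not enumerated in the abstract), splits an image into fixed-size patches, and applies a randomly chosen combination of transformations to each patch independently.

```python
import numpy as np

# Hypothetical stand-ins for the paper's geometric / color transformations.
# Each maps a (patch_size, patch_size, C) float array in [0, 1] to one of
# the same shape.
TRANSFORMS = [
    lambda p: np.flip(p, axis=0),             # vertical flip
    lambda p: np.flip(p, axis=1),             # horizontal flip
    lambda p: np.rot90(p, k=1, axes=(0, 1)),  # 90-degree rotation
    lambda p: np.clip(p * 1.1, 0.0, 1.0),     # brightness scaling
    lambda p: np.clip(p + 0.05, 0.0, 1.0),    # brightness shift
    lambda p: p[:, :, ::-1],                  # channel reversal
]


def patchwise_transform(image, patch_size=8, n_ops=2, rng=None):
    """Split `image` (H, W, C) into non-overlapping square patches and
    apply a random combination of `n_ops` distinct transformations to
    each patch, returning the reassembled image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = image.shape
    out = image.copy()
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patch = out[i:i + patch_size, j:j + patch_size]
            # Pick a combination of transformations for this patch only.
            for k in rng.choice(len(TRANSFORMS), size=n_ops, replace=False):
                patch = TRANSFORMS[k](patch)
            out[i:i + patch_size, j:j + patch_size] = patch
    return out
```

Because each patch receives its own random combination, a perturbation crafted against the whole image is disrupted unevenly across the input, while most patch content (and hence useful information for the classifier) is preserved, which is the intuition behind the flexibility the abstract claims.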
Journal overview:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control, and data integrity in all sectors: industry, commerce, and academia. Recognized worldwide as the primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.