Zheming Li, Hengwei Zhang, Junqiang Ma, Bo Yang, Chenwei Li, Jingwen Li
Title: Rotation Transformation: A Method to Improve the Transferability of Adversarial Examples
DOI: 10.1109/dsins54396.2021.9670580
Published in: 2021 International Conference on Digital Society and Intelligent Systems (DSInS)
Publication date: 2021-12-03
Citations: 1
Abstract
Convolutional neural network models are vulnerable to adversarial examples: adding perturbations to clean images that are imperceptible to humans can cause a model to misclassify. Among adversarial attack methods, white-box attacks achieve high success rates, but "overfitting" of the adversarial examples to the source model leads to low success rates under black-box attacks. To address this, this paper introduces data augmentation into the adversarial example generation process, establishing a probability model that applies random rotation transformations to clean images; this improves the transferability of adversarial examples and raises their success rate in the black-box setting. Experimental results on ImageNet show that our proposed RO-MI-FGSM method has a stronger attack effect, achieving a black-box attack success rate of up to 80.3%.
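The structure the abstract describes (momentum iterative FGSM with a probabilistic input rotation at each step) can be sketched as follows. This is a hedged, illustrative NumPy sketch, not the paper's implementation: the "model" is a toy quadratic loss with an analytic gradient, and the rotation is a random multiple of 90 degrees via `np.rot90` standing in for the paper's random-angle transform; the function name `ro_mi_fgsm` and the probability parameter `p` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(x, t):
    # Toy stand-in for the network's input gradient:
    # gradient of the quadratic loss 0.5 * ||x - t||^2 w.r.t. x.
    return x - t

def rotate_random(x, rng):
    # Placeholder rotation: a random multiple of 90 degrees.
    # The paper's transform presumably uses arbitrary random angles.
    k = int(rng.integers(1, 4))
    return np.rot90(x, k)

def ro_mi_fgsm(x, t, eps=0.1, steps=10, mu=1.0, p=0.5, rng=rng):
    """Momentum iterative FGSM with probabilistic rotation (sketch)."""
    alpha = eps / steps          # per-step size so total budget is eps
    g = np.zeros_like(x)         # accumulated momentum
    adv = x.copy()
    for _ in range(steps):
        # With probability p, compute the gradient on a rotated copy
        # of the current adversarial image (the data augmentation step).
        inp = rotate_random(adv, rng) if rng.random() < p else adv
        grad = loss_grad(inp, t)
        # Momentum accumulation with L1-normalized gradient (MI-FGSM style).
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        adv = adv + alpha * np.sign(g)
        # Keep the perturbation inside the L-infinity ball of radius eps.
        adv = np.clip(adv, x - eps, x + eps)
    return adv

x = rng.random((4, 4))           # a toy 4x4 "image"
t = np.zeros((4, 4))
adv = ro_mi_fgsm(x, t, eps=0.1)
```

The key design point mirrored here is that the random transformation is applied only when computing the gradient, while the momentum update and clipping operate on the untransformed adversarial image, which is what discourages overfitting to the source model.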