{"title":"生成对抗性示例的快速梯度缩放方法","authors":"Zhefeng Xu, Zhijian Luo, Jinlong Mu","doi":"10.1145/3529466.3529497","DOIUrl":null,"url":null,"abstract":"Though deep neural networks have achieved great success on many challenging tasks, they are demonstrated to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to the clean examples. As the first generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are usually human-perceptible because FGSM modifies the pixels by the same amplitude through computing the sign of the gradients of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss to the valid range and can make adversarial perturbation to be more human-imperceptible. Extensive experiments on MNIST and CIFAR-10 datasets show that while maintaining similar attack success rates, our proposed FGScaledM can generate more fine-grained and more human-imperceptible adversarial perturbations than FGSM.","PeriodicalId":375562,"journal":{"name":"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fast Gradient Scaled Method for Generating Adversarial Examples\",\"authors\":\"Zhefeng Xu, Zhijian Luo, Jinlong Mu\",\"doi\":\"10.1145/3529466.3529497\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Though deep neural networks have achieved great success on many challenging tasks, they are demonstrated to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to the clean examples. As the first generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are usually human-perceptible because FGSM modifies the pixels by the same amplitude through computing the sign of the gradients of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradients of the loss to the valid range and can make adversarial perturbation to be more human-imperceptible. 
Extensive experiments on MNIST and CIFAR-10 datasets show that while maintaining similar attack success rates, our proposed FGScaledM can generate more fine-grained and more human-imperceptible adversarial perturbations than FGSM.\",\"PeriodicalId\":375562,\"journal\":{\"name\":\"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3529466.3529497\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 6th International Conference on Innovation in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3529466.3529497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Though deep neural networks have achieved great success on many challenging tasks, they have been shown to be vulnerable to adversarial examples, which fool neural networks by adding human-imperceptible perturbations to clean examples. As the first-generation attack for generating adversarial examples, FGSM has inspired many follow-up attacks. However, the adversarial perturbations generated by FGSM are usually human-perceptible, because FGSM modifies every pixel by the same amplitude, using only the sign of the gradient of the loss. To this end, we propose the fast gradient scaled method (FGScaledM), which scales the gradient of the loss to the valid perturbation range, making the adversarial perturbation more human-imperceptible. Extensive experiments on the MNIST and CIFAR-10 datasets show that, while maintaining similar attack success rates, our proposed FGScaledM generates finer-grained and more human-imperceptible adversarial perturbations than FGSM.
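To make the contrast concrete, below is a minimal sketch of the two attacks described in the abstract. The FGSM step is the standard sign-of-gradient update; the FGScaledM step is an assumed form in which the gradient is rescaled by its per-example maximum absolute value so that each pixel's perturbation is proportional to its gradient magnitude and bounded by eps (the paper's exact scaling rule is not given in the abstract, and the function names and PyTorch framing are illustrative, not the authors' code).

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Classic FGSM: perturb every pixel by the same amplitude eps * sign(grad)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def fgscaledm_attack(model, x, y, eps):
    """Sketch of an assumed FGScaledM: rescale the gradient so its largest
    entry per example has magnitude eps, so pixels with small gradients
    receive proportionally small (finer-grained) perturbations."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Per-example max-abs normalization keeps every perturbation within [-eps, eps].
    flat = grad.view(grad.size(0), -1)
    max_abs = flat.abs().max(dim=1, keepdim=True).values.clamp_min(1e-12)
    scaled = (flat / max_abs).view_as(grad)
    return (x + eps * scaled).clamp(0.0, 1.0).detach()
```

Under this reading, both attacks use the same gradient and the same budget eps; the only difference is that FGSM collapses the gradient to its sign, while the scaled variant preserves its relative magnitudes, which is consistent with the abstract's claim of finer-grained, less perceptible perturbations at similar attack success rates.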