{"title":"自适应压缩对抗摄动的鲁棒攻击","authors":"Jinping Su, L. Jing","doi":"10.1109/ICCEA53728.2021.00071","DOIUrl":null,"url":null,"abstract":"Adversarial examples expose the vulnerability of deep neural networks that perform well in various fields. However, adversarial perturbations crafted by the existing attack methods are often aimed at the whole image. They are usually random, and the human eye can even easily perceive some of them. This paper proposes an adaptive method to compress the adversarial perturbation. Under the premise of ensuring the success of attacks, generating perturbations as small as possible to change the decision of classifiers. First, the authors find the minimum point of loss function by the optimization method, to expand the spanning space of adversarial examples. Calculating and selecting the smaller perturbation between this point and the original input. Then, in order to retain the useful perturbation and remove redundancy, the authors look for important regions in the input data that determine the network predict results, and construct an importance mask for the smaller perturbation of the previous stage. Extensive experiments on the ImageNet dataset and multiple network classifiers show that our method is effective. Compared with advanced attack methods, the $\\mathbf{L}_{2}$ distance of adversarial perturbation obtained by our method is smaller and more practical, and the generated adversarial examples have strong transferability.","PeriodicalId":325790,"journal":{"name":"2021 International Conference on Computer Engineering and Application (ICCEA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust Attack with Adaptive Compress Adversarial Perturbations\",\"authors\":\"Jinping Su, L. Jing\",\"doi\":\"10.1109/ICCEA53728.2021.00071\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples expose the vulnerability of deep neural networks that perform well in various fields. However, adversarial perturbations crafted by the existing attack methods are often aimed at the whole image. They are usually random, and the human eye can even easily perceive some of them. This paper proposes an adaptive method to compress the adversarial perturbation. Under the premise of ensuring the success of attacks, generating perturbations as small as possible to change the decision of classifiers. First, the authors find the minimum point of loss function by the optimization method, to expand the spanning space of adversarial examples. Calculating and selecting the smaller perturbation between this point and the original input. Then, in order to retain the useful perturbation and remove redundancy, the authors look for important regions in the input data that determine the network predict results, and construct an importance mask for the smaller perturbation of the previous stage. Extensive experiments on the ImageNet dataset and multiple network classifiers show that our method is effective. 
Compared with advanced attack methods, the $\\\\mathbf{L}_{2}$ distance of adversarial perturbation obtained by our method is smaller and more practical, and the generated adversarial examples have strong transferability.\",\"PeriodicalId\":325790,\"journal\":{\"name\":\"2021 International Conference on Computer Engineering and Application (ICCEA)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Computer Engineering and Application (ICCEA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCEA53728.2021.00071\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Computer Engineering and Application (ICCEA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCEA53728.2021.00071","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robust Attack with Adaptive Compress Adversarial Perturbations
Adversarial examples expose the vulnerability of deep neural networks that otherwise perform well across many fields. However, the perturbations crafted by existing attack methods are usually spread over the whole image, are largely unstructured, and some are even easily perceptible to the human eye. This paper proposes an adaptive method for compressing adversarial perturbations: under the premise that the attack still succeeds, it generates perturbations that are as small as possible while still changing the classifier's decision. First, the authors locate a minimum of the loss function by optimization to enlarge the search space of adversarial examples, and select the smaller of the perturbations between this point and the original input. Then, to retain the useful part of the perturbation and remove redundancy, they identify the regions of the input that determine the network's prediction and construct an importance mask to prune the perturbation from the previous stage. Extensive experiments on the ImageNet dataset with multiple network classifiers show that the method is effective. Compared with state-of-the-art attack methods, the $\mathbf{L}_{2}$ distance of the resulting perturbation is smaller, making the attack more practical, and the generated adversarial examples transfer well across models.
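The abstract describes a two-stage pipeline: an optimization step that drives the input toward a loss minimum to obtain a candidate perturbation, followed by an importance mask that keeps only the entries in regions that drive the prediction. The following is a minimal PyTorch sketch of that general idea, not the authors' exact algorithm; the function name, the gradient-magnitude saliency used as the importance measure, and all hyperparameters (`steps`, `lr`, `mask_ratio`) are illustrative assumptions.

```python
# Sketch of an "optimize, then compress with an importance mask" attack.
# Assumes `model` is any image classifier (e.g. a torchvision model in eval mode)
# and `x`, `y` are a batch of inputs in [0, 1] and their true labels.
import torch
import torch.nn.functional as F

def compress_attack(model, x, y, steps=50, lr=0.01, mask_ratio=0.3):
    x = x.clone().detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    # Stage 1: gradient-based optimization of an untargeted misclassification
    # loss; minimizing -CE pushes the prediction away from the true label.
    for _ in range(steps):
        logits = model(x + delta)
        loss = -F.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: importance mask. Rank perturbation entries by input-gradient
    # magnitude (a simple saliency proxy) and zero out the least important
    # (1 - mask_ratio) fraction, compressing the perturbation.
    x_adv = (x + delta).detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    importance = grad.abs()
    thresh = torch.quantile(importance.flatten(1), 1.0 - mask_ratio, dim=1)
    mask = (importance >= thresh.view(-1, 1, 1, 1)).float()
    delta_compressed = delta.detach() * mask

    # Return the adversarial example, clipped back to the valid pixel range.
    return torch.clamp(x + delta_compressed, 0.0, 1.0)
```

In this sketch the mask is derived from gradient saliency; the paper's actual importance criterion and the rule for choosing the smaller of the candidate perturbations may differ.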