{"title":"基于查询和扰动分布的改进黑盒攻击","authors":"Weiwei Zhao, Z. Zeng","doi":"10.1109/ICACI52617.2021.9435907","DOIUrl":null,"url":null,"abstract":"Adversarial examples cause the deep neural network prediction error, which is a great threat to the deep neural network. How to generate more natural adversarial examples and improve the robustness of deep neural networks has received attention. In this paper, we propose an improved blackbox attack (IBBA) algorithm based on query and perturbation distribution. This algorithm only needs the top-l label of the attacked model to generate the adversarial examples. Based on the existing black-box attacks, we optimize the performance of the algorithm from two aspects: query distribution and perturbation distribution. In the aspect of query distribution, we adopt different strategies for nontargeted attack and targeted attack; in the aspect of perturbation distribution, we choose different low-frequency noise according to the difference between the targeted attack and nontargeted attack. The experimental results on ImageNet show that the proposed algorithm is better than the existing algorithms in low query number, and the targeted attack is better in each specified query number.","PeriodicalId":382483,"journal":{"name":"2021 13th International Conference on Advanced Computational Intelligence (ICACI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Improved black-box attack based on query and perturbation distribution\",\"authors\":\"Weiwei Zhao, Z. Zeng\",\"doi\":\"10.1109/ICACI52617.2021.9435907\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples cause the deep neural network prediction error, which is a great threat to the deep neural network. How to generate more natural adversarial examples and improve the robustness of deep neural networks has received attention. In this paper, we propose an improved blackbox attack (IBBA) algorithm based on query and perturbation distribution. This algorithm only needs the top-l label of the attacked model to generate the adversarial examples. Based on the existing black-box attacks, we optimize the performance of the algorithm from two aspects: query distribution and perturbation distribution. In the aspect of query distribution, we adopt different strategies for nontargeted attack and targeted attack; in the aspect of perturbation distribution, we choose different low-frequency noise according to the difference between the targeted attack and nontargeted attack. 
The experimental results on ImageNet show that the proposed algorithm is better than the existing algorithms in low query number, and the targeted attack is better in each specified query number.\",\"PeriodicalId\":382483,\"journal\":{\"name\":\"2021 13th International Conference on Advanced Computational Intelligence (ICACI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 13th International Conference on Advanced Computational Intelligence (ICACI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACI52617.2021.9435907\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 13th International Conference on Advanced Computational Intelligence (ICACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACI52617.2021.9435907","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improved black-box attack based on query and perturbation distribution
Adversarial examples cause deep neural networks to make prediction errors and therefore pose a serious threat to them. How to generate more natural adversarial examples and improve the robustness of deep neural networks has received considerable attention. In this paper, we propose an improved black-box attack (IBBA) algorithm based on query and perturbation distribution. The algorithm needs only the top-1 label of the attacked model to generate adversarial examples. Building on existing black-box attacks, we optimize the algorithm's performance from two aspects: query distribution and perturbation distribution. For the query distribution, we adopt different strategies for nontargeted and targeted attacks; for the perturbation distribution, we choose different low-frequency noise according to whether the attack is targeted or nontargeted. Experimental results on ImageNet show that the proposed algorithm outperforms existing algorithms under low query budgets, and that its targeted attack performs better at every specified query budget.
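The abstract does not give implementation details, so the following minimal Python sketch only illustrates the general setting it describes: a hard-label (top-1) black-box attack that queries the model with candidate images perturbed by low-frequency noise and keeps a candidate when the returned label satisfies the attack goal. The function names (query_fn, low_freq_noise, hard_label_attack), the nearest-neighbour upsampling trick, and all parameter values are illustrative assumptions, not the IBBA algorithm from the paper.

    import numpy as np

    def low_freq_noise(shape, freq_ratio=0.25, rng=None):
        # Sample Gaussian noise in a low-resolution grid and upsample it,
        # so the perturbation contains only low spatial frequencies.
        rng = rng or np.random.default_rng()
        h, w, c = shape
        lh, lw = max(1, int(h * freq_ratio)), max(1, int(w * freq_ratio))
        small = rng.standard_normal((lh, lw, c))
        rows = np.arange(h) * lh // h          # nearest-neighbour row indices
        cols = np.arange(w) * lw // w          # nearest-neighbour column indices
        return small[rows][:, cols]

    def hard_label_attack(query_fn, x, y_true, steps=1000, eps=0.05,
                          freq_ratio=0.25, targeted=False, y_target=None,
                          rng=None):
        # Greedy random search: accept a candidate only if the top-1 label
        # returned by query_fn moves in the desired direction.
        rng = rng or np.random.default_rng()
        x_adv = x.copy()
        for _ in range(steps):
            delta = eps * low_freq_noise(x.shape, freq_ratio, rng)
            cand = np.clip(x_adv + delta, 0.0, 1.0)
            pred = query_fn(cand)              # model returns only the top-1 label
            ok = (pred == y_target) if targeted else (pred != y_true)
            if ok:
                x_adv = cand
        return x_adv

Here query_fn stands for the attacked model's hard-label interface (image in, top-1 class out); swapping the noise sampler or the acceptance rule is where a method like IBBA would differ from this sketch.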