{"title":"Flow-Pronged Defense against Adversarial Examples","authors":"Shenghong He, Chao Yi, Zongheng Zongheng, Yunyun Dong","doi":"10.1109/AIAM54119.2021.00059","DOIUrl":null,"url":null,"abstract":"Recent studies have shown that deep neural networks are susceptible to interference from adversarial examples. Adversarial examples are adding imperceptible noise to the data. Currently, there are many types of adversarial examples in image classification, and these adversarial examples can easily lead to DNN misclassification. Therefore, it is essential to design AEs detection methods to allow them to be rejected. In the paper, we propose Flow-Pronged Defense (FPD) for adversarial examples, which is a framework for protecting neural network classification models from adversarial examples. FPD does not need to modify the protected classifier, which includes a FLOW model and a residual network classifier. The Flow model transforms the adversarial examples so that the classifier can better classify the adversarial examples and clean examples. The residual network strengthens the difference between disturbance and clean data through cross-layer connections. Compared with the state-of-the-art method, many experiments show that FPD has higher accuracy and generalization ability.","PeriodicalId":227320,"journal":{"name":"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIAM54119.2021.00059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent studies have shown that deep neural networks (DNNs) are susceptible to interference from adversarial examples: inputs perturbed with imperceptible noise. Many kinds of adversarial examples now exist in image classification, and they can easily cause a DNN to misclassify, so it is essential to design detection methods that allow adversarial examples to be rejected. In this paper, we propose Flow-Pronged Defense (FPD), a framework for protecting neural network classification models from adversarial examples. FPD requires no modification of the protected classifier; it consists of a flow model and a residual network classifier. The flow model transforms inputs so that the classifier can better separate adversarial examples from clean examples, while the residual network amplifies the difference between perturbed and clean data through its cross-layer (skip) connections. Extensive experiments show that FPD achieves higher accuracy and better generalization ability than state-of-the-art methods.
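
The abstract gives only a high-level view of the architecture, but the pipeline it describes, a flow model that transforms inputs followed by a residual network classifier that sits alongside the untouched protected classifier, can be sketched. The following is a minimal, hypothetical PyTorch sketch: the affine-coupling flow design, the layer widths, and the names `AffineCoupling`, `ResidualBlock`, and `FPDPipeline` are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an FPD-style pipeline: a normalizing-flow
# preprocessing step (one RealNVP-style affine coupling layer) feeding a
# small residual classifier. All sizes and names are illustrative.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling block: transforms half of the flattened input
    conditioned on the other half."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, 2 * (dim - self.half)),
        )

    def forward(self, x):
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(xa).chunk(2, dim=1)
        # Bound the scale with tanh for numerical stability.
        yb = xb * torch.exp(torch.tanh(log_s)) + t
        return torch.cat([xa, yb], dim=1)

class ResidualBlock(nn.Module):
    """Fully connected residual block; the skip connection carries the
    input forward, which is one way the cross-layer connections the
    abstract mentions could preserve perturbation-sensitive features."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.body(x))

class FPDPipeline(nn.Module):
    """Flow transform -> residual classifier, leaving the original
    (protected) classifier unmodified."""
    def __init__(self, dim=784, num_classes=10, depth=3):
        super().__init__()
        self.flow = AffineCoupling(dim)
        self.blocks = nn.Sequential(*[ResidualBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        z = self.flow(x.flatten(1))       # transform input with the flow
        return self.head(self.blocks(z))  # classify the transformed input

# Usage on a batch of 28x28 images (MNIST-sized inputs, an assumption).
model = FPDPipeline()
logits = model(torch.randn(8, 1, 28, 28))
print(logits.shape)  # torch.Size([8, 10])
```

Under this reading, the two "prongs" are the flow transform and the residual classifier; both are trained in addition to, not in place of, the classifier being protected.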