Flow-Pronged Defense against Adversarial Examples

Shenghong He, Chao Yi, Zongheng Zongheng, Yunyun Dong
{"title":"针对对抗性例子的流动防御","authors":"Shenghong He, Chao Yi, Zongheng Zongheng, Yunyun Dong","doi":"10.1109/AIAM54119.2021.00059","DOIUrl":null,"url":null,"abstract":"Recent studies have shown that deep neural networks are susceptible to interference from adversarial examples. Adversarial examples are adding imperceptible noise to the data. Currently, there are many types of adversarial examples in image classification, and these adversarial examples can easily lead to DNN misclassification. Therefore, it is essential to design AEs detection methods to allow them to be rejected. In the paper, we propose Flow-Pronged Defense (FPD) for adversarial examples, which is a framework for protecting neural network classification models from adversarial examples. FPD does not need to modify the protected classifier, which includes a FLOW model and a residual network classifier. The Flow model transforms the adversarial examples so that the classifier can better classify the adversarial examples and clean examples. The residual network strengthens the difference between disturbance and clean data through cross-layer connections. Compared with the state-of-the-art method, many experiments show that FPD has higher accuracy and generalization ability.","PeriodicalId":227320,"journal":{"name":"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Flow-Pronged Defense against Adversarial Examples\",\"authors\":\"Shenghong He, Chao Yi, Zongheng Zongheng, Yunyun Dong\",\"doi\":\"10.1109/AIAM54119.2021.00059\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent studies have shown that deep neural networks are susceptible to interference from adversarial examples. Adversarial examples are adding imperceptible noise to the data. Currently, there are many types of adversarial examples in image classification, and these adversarial examples can easily lead to DNN misclassification. Therefore, it is essential to design AEs detection methods to allow them to be rejected. In the paper, we propose Flow-Pronged Defense (FPD) for adversarial examples, which is a framework for protecting neural network classification models from adversarial examples. FPD does not need to modify the protected classifier, which includes a FLOW model and a residual network classifier. The Flow model transforms the adversarial examples so that the classifier can better classify the adversarial examples and clean examples. The residual network strengthens the difference between disturbance and clean data through cross-layer connections. 
Compared with the state-of-the-art method, many experiments show that FPD has higher accuracy and generalization ability.\",\"PeriodicalId\":227320,\"journal\":{\"name\":\"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)\",\"volume\":\"24 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/AIAM54119.2021.00059\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIAM54119.2021.00059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Recent studies have shown that deep neural networks (DNNs) are susceptible to adversarial examples, which are inputs perturbed with imperceptible noise. In image classification there are many types of adversarial examples, and they can easily cause a DNN to misclassify. It is therefore essential to design detection methods so that adversarial examples (AEs) can be rejected. In this paper, we propose Flow-Pronged Defense (FPD), a framework for protecting neural network classification models from adversarial examples. FPD does not require modifying the protected classifier; it consists of a flow model and a residual network classifier. The flow model transforms inputs so that the classifier can better distinguish adversarial examples from clean examples, and the residual network strengthens the difference between perturbed and clean data through cross-layer connections. Extensive experiments show that FPD achieves higher accuracy and better generalization than state-of-the-art methods.
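
The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch sketch of how the two described components could be wired together: an externally supplied flow model that transforms each input, followed by a small residual classifier whose skip connections carry the perturbation signal across layers. All class names, layer sizes, and the choice of PyTorch are illustrative assumptions; the paper does not publish an implementation.

```python
# Hypothetical sketch of the FPD pipeline described in the abstract.
# The flow model is assumed to be supplied externally (e.g. a trained
# normalizing flow); here it is any nn.Module mapping an image to an image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Residual block: the skip (cross-layer) connection lets small input
    perturbations propagate alongside the learned features."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))  # cross-layer (skip) connection


class FPDPipeline(nn.Module):
    """Assumed FPD wiring: flow transform, then residual classifier.
    The protected classifier itself is left untouched, as in the abstract."""

    def __init__(self, flow_model: nn.Module, num_classes: int = 10):
        super().__init__()
        self.flow = flow_model          # assumed: image -> transformed image
        self.stem = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(4)])
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.flow(x)                          # transform AE / clean input
        h = self.blocks(self.stem(z))
        h = torch.flatten(F.adaptive_avg_pool2d(h, 1), 1)
        return self.head(h)                       # class logits


if __name__ == "__main__":
    # Smoke test with an identity transform standing in for the flow model.
    model = FPDPipeline(flow_model=nn.Identity(), num_classes=10)
    logits = model(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
```

For a quick shape check, torch.nn.Identity() can stand in for the flow model as above; in the paper's setting this slot would presumably hold the trained flow model that transforms adversarial and clean inputs before classification.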