Disentangling Ensemble Models on Adversarial Generalization in Image Classification

Chenwei Li, Mengyuan Pan, Bo Yang, Hengwei Zhang
{"title":"Disentangling Ensemble Models on Adversarial Generalization in Image Classification","authors":"Chenwei Li, Mengyuan Pan, Bo Yang, Hengwei Zhang","doi":"10.1109/EEI59236.2023.10212535","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks are widely used in computer vision and image processing. However, when the original input is added with manually imperceptible perturbations, these deep network models mostly tend to output incorrect predictions. The vulnerability of these models poses great threat to intelligent applications, and these manually imperceptible perturbations are called adversarial examples. Current baseline methods have achieved considerable white-box attack success rate, but black-box rate remains to be improved. To boost the adversarial generalization, ensemble models method is introduced to the process of generating adversarial examples. This paper proposes multiple ensemble strategies with baseline attack methods based on existing ensemble strategy used by former methods. Experiment on ImageNet dataset empirically verifies the optimal ensemble strategy on boosting adversarial generalization.","PeriodicalId":363603,"journal":{"name":"2023 5th International Conference on Electronic Engineering and Informatics (EEI)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 5th International Conference on Electronic Engineering and Informatics (EEI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EEI59236.2023.10212535","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Convolutional neural networks are widely used in computer vision and image processing. However, when imperceptible perturbations are added to the original input, these deep network models tend to output incorrect predictions. The vulnerability of these models poses a serious threat to intelligent applications, and inputs carrying such imperceptible perturbations are called adversarial examples. Current baseline attack methods achieve high white-box attack success rates, but their black-box success rates remain to be improved. To boost adversarial generalization, the ensemble-of-models approach is introduced into the process of generating adversarial examples. This paper proposes multiple ensemble strategies combined with baseline attack methods, building on the ensemble strategy used by previous methods. Experiments on the ImageNet dataset empirically identify the optimal ensemble strategy for boosting adversarial generalization.
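The abstract does not specify which ensemble strategies the paper compares, but a common baseline fuses the logits of several surrogate models before computing the attack gradient. The sketch below illustrates that idea with an iterative FGSM attack in PyTorch; the function name, the logit-averaging fusion, the `weights` parameter, and all hyperparameter values are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def ensemble_ifgsm(models, x, y, eps=16/255, steps=10, weights=None):
    """Generate adversarial examples with I-FGSM against an ensemble of models.

    Assumption: the ensemble is fused by averaging logits, one commonly used
    strategy; the paper evaluates several fusion strategies not shown here.
    """
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    alpha = eps / steps          # per-step size so the total budget is eps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Fuse the ensemble at the logit level before the loss.
        logits = sum(w * m(x_adv) for w, m in zip(weights, models))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back into the L_inf ball around x and the valid pixel range.
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```

In a black-box transfer setting, `models` would be a list of pretrained ImageNet classifiers in eval mode serving as surrogates, and the resulting `x_adv` would then be evaluated against a held-out target model to measure adversarial generalization.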