Robustness of classifier to adversarial examples under imbalanced data

Wenqian Zhao, Han Li, Lingjuan Wu, Liangxuan Zhu, Xuelin Zhang, Yizhi Zhao
{"title":"Robustness of classifier to adversarial examples under imbalanced data","authors":"Wenqian Zhao, Han Li, Lingjuan Wu, Liangxuan Zhu, Xuelin Zhang, Yizhi Zhao","doi":"10.1109/icccs55155.2022.9846074","DOIUrl":null,"url":null,"abstract":"Adversarial examples (AE) are used to fool classifier recently, which poses great challenges for classifier design. Therefore, it is theoretically crucial to evaluate the robustness of classifier to AE for a better classifier design. In this paper, we provide a theoretical framework to analyze the robustness of classifier to AE under imbalanced dataset from the perspective of AUC (Area under the ROC curve), and derive an interpretable upper bound. Specifically, we illustrate the obtained upper bound of linear classifier, which indicates that the upper bound depends on the difficulty of the classification task and the risk of the classifier. Experimental results on MNIST and CIFAR-10 datasets show that the classifiers designed with pairwise surrogate losses of AUC are not robust to adversarial attack. The nonlinear classifier has a higher robustness to AE compared to the linear one, which indicates that more flexible classifier can be used to improve adversarial robustness.","PeriodicalId":121713,"journal":{"name":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icccs55155.2022.9846074","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Adversarial examples (AEs) have recently been used to fool classifiers, which poses great challenges for classifier design. It is therefore theoretically important to evaluate the robustness of a classifier to AEs in order to design better classifiers. In this paper, we provide a theoretical framework for analyzing the robustness of classifiers to AEs on imbalanced datasets from the perspective of AUC (Area Under the ROC Curve), and derive an interpretable upper bound. Specifically, we illustrate the obtained upper bound for a linear classifier, which shows that the bound depends on the difficulty of the classification task and the risk of the classifier. Experimental results on the MNIST and CIFAR-10 datasets show that classifiers trained with pairwise surrogate losses for AUC are not robust to adversarial attacks. The nonlinear classifier achieves higher robustness to AEs than the linear one, which indicates that more flexible classifiers can be used to improve adversarial robustness.
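As a concrete illustration of the setting the abstract describes (not a reproduction of the paper's experiments), the sketch below trains a linear scorer on imbalanced synthetic data with a pairwise squared surrogate of AUC, then applies an FGSM-style L-infinity perturbation to the inputs and compares clean and adversarial AUC. All data, the choice of surrogate (pairwise squared loss), and the perturbation budget `eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy data (hypothetical setup, not the paper's experiments):
# few positives, many negatives, positives shifted by +1 in every dimension.
n_pos, n_neg, d = 20, 200, 10
X_pos = rng.normal(1.0, 1.0, (n_pos, d))
X_neg = rng.normal(0.0, 1.0, (n_neg, d))

def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of correctly ordered positive/negative pairs."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

# Train a linear scorer f(x) = w.x by gradient descent on a pairwise squared
# surrogate of AUC: loss = mean over (pos i, neg j) of (1 - (w.x_i - w.x_j))^2.
w = np.zeros(d)
lr = 0.01
for _ in range(200):
    margins = X_pos @ w[:, None].T.ravel()[None, :].T - 0  # placeholder, replaced below
    s_pos, s_neg = X_pos @ w, X_neg @ w
    margins = s_pos[:, None] - s_neg[None, :]              # shape (n_pos, n_neg)
    coeff = 2.0 * (margins - 1.0) / (n_pos * n_neg)        # d(loss)/d(margin)
    grad = coeff.sum(axis=1) @ X_pos - coeff.sum(axis=0) @ X_neg
    w -= lr * grad

# For a linear scorer the input-gradient of the score is w itself, so the
# worst-case L-infinity attack of size eps shifts each point by eps*sign(w):
# positives are pushed toward lower scores, negatives toward higher scores.
eps = 1.0
delta = eps * np.sign(w)
adv_pos = X_pos - delta
adv_neg = X_neg + delta

clean_auc = auc(X_pos @ w, X_neg @ w)
adv_auc = auc(adv_pos @ w, adv_neg @ w)
print(f"clean AUC: {clean_auc:.3f}, adversarial AUC: {adv_auc:.3f}")
```

On this toy data the clean AUC is near 1, while the perturbation collapses the ranking, mirroring the abstract's observation that AUC-surrogate-trained linear classifiers are fragile under attack. Note that for a linear scorer the attack is exact (the sign gradient is the true worst case), which is one reason the paper's bound is easiest to interpret in the linear setting.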