{"title":"不平衡数据下分类器对对抗样本的鲁棒性","authors":"Wenqian Zhao, Han Li, Lingjuan Wu, Liangxuan Zhu, Xuelin Zhang, Yizhi Zhao","doi":"10.1109/icccs55155.2022.9846074","DOIUrl":null,"url":null,"abstract":"Adversarial examples (AE) are used to fool classifier recently, which poses great challenges for classifier design. Therefore, it is theoretically crucial to evaluate the robustness of classifier to AE for a better classifier design. In this paper, we provide a theoretical framework to analyze the robustness of classifier to AE under imbalanced dataset from the perspective of AUC (Area under the ROC curve), and derive an interpretable upper bound. Specifically, we illustrate the obtained upper bound of linear classifier, which indicates that the upper bound depends on the difficulty of the classification task and the risk of the classifier. Experimental results on MNIST and CIFAR-10 datasets show that the classifiers designed with pairwise surrogate losses of AUC are not robust to adversarial attack. The nonlinear classifier has a higher robustness to AE compared to the linear one, which indicates that more flexible classifier can be used to improve adversarial robustness.","PeriodicalId":121713,"journal":{"name":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robustness of classifier to adversarial examples under imbalanced data\",\"authors\":\"Wenqian Zhao, Han Li, Lingjuan Wu, Liangxuan Zhu, Xuelin Zhang, Yizhi Zhao\",\"doi\":\"10.1109/icccs55155.2022.9846074\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples (AE) are used to fool classifier recently, which poses great challenges for classifier design. Therefore, it is theoretically crucial to evaluate the robustness of classifier to AE for a better classifier design. In this paper, we provide a theoretical framework to analyze the robustness of classifier to AE under imbalanced dataset from the perspective of AUC (Area under the ROC curve), and derive an interpretable upper bound. Specifically, we illustrate the obtained upper bound of linear classifier, which indicates that the upper bound depends on the difficulty of the classification task and the risk of the classifier. Experimental results on MNIST and CIFAR-10 datasets show that the classifiers designed with pairwise surrogate losses of AUC are not robust to adversarial attack. 
The nonlinear classifier has a higher robustness to AE compared to the linear one, which indicates that more flexible classifier can be used to improve adversarial robustness.\",\"PeriodicalId\":121713,\"journal\":{\"name\":\"2022 7th International Conference on Computer and Communication Systems (ICCCS)\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 7th International Conference on Computer and Communication Systems (ICCCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icccs55155.2022.9846074\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Computer and Communication Systems (ICCCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icccs55155.2022.9846074","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Adversarial examples (AE) have recently been used to fool classifiers, which poses a serious challenge for classifier design. Evaluating a classifier's robustness to AE is therefore theoretically important for designing better classifiers. In this paper, we provide a theoretical framework for analyzing the robustness of classifiers to AE on imbalanced datasets from the perspective of AUC (area under the ROC curve), and derive an interpretable upper bound. Specifically, we instantiate the obtained upper bound for linear classifiers, showing that it depends on the difficulty of the classification task and the risk of the classifier. Experimental results on the MNIST and CIFAR-10 datasets show that classifiers trained with pairwise surrogate losses for AUC are not robust to adversarial attack, and that nonlinear classifiers are more robust to AE than linear ones, which suggests that more flexible classifiers can be used to improve adversarial robustness.
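As a concrete illustration of the kind of evaluation the abstract describes, below is a minimal sketch (assuming Python with NumPy and scikit-learn; the toy data, surrogate loss, learning rate, and attack budget are illustrative assumptions, not the paper's actual framework or experimental setup). It trains a linear scorer with a pairwise logistic surrogate of AUC on an imbalanced synthetic problem, then measures how much the AUC drops under an FGSM-style L-infinity perturbation.

```python
# Illustrative sketch only: pairwise AUC surrogate training + a simple
# worst-case L-infinity attack on a linear scorer f(x) = w.x
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Imbalanced 2-D Gaussian toy data: few positives, many negatives.
n_pos, n_neg, d = 50, 500, 2
X_pos = rng.normal(loc=1.0, scale=1.0, size=(n_pos, d))
X_neg = rng.normal(loc=-1.0, scale=1.0, size=(n_neg, d))

def pairwise_logistic_grad(w, X_pos, X_neg):
    # Gradient of the pairwise logistic surrogate of (1 - AUC):
    # mean over (pos, neg) pairs of log(1 + exp(-(f(x+) - f(x-)))).
    diff = X_pos[:, None, :] - X_neg[None, :, :]   # shape (n_pos, n_neg, d)
    margins = diff @ w                              # f(x+) - f(x-) per pair
    coeff = -1.0 / (1.0 + np.exp(margins))          # d(loss)/d(margin)
    return (coeff[..., None] * diff).mean(axis=(0, 1))

# Train the linear scorer by plain gradient descent on the surrogate.
w = np.zeros(d)
for _ in range(200):
    w -= 0.5 * pairwise_logistic_grad(w, X_pos, X_neg)

X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
print("clean AUC:", roc_auc_score(y, X @ w))

# FGSM-style L-infinity attack on the linear scorer: push positives against
# the weight vector and negatives along it, each by a budget eps.
eps = 0.5
shift = eps * np.sign(w) * np.where(y[:, None] == 1, 1.0, -1.0)
X_adv = X - shift
print("adversarial AUC:", roc_auc_score(y, X_adv @ w))
```

For a linear scorer, the sign-of-weights shift is already the worst case within the L-infinity budget, so no iterative attack is needed here; evaluating a nonlinear classifier in the same way would instead require a gradient-based attack such as PGD.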