{"title":"利用富残差模型检测对抗样本以提高CNN模型中的数据安全性","authors":"Kaijun Wu, Bo Tian, Yougang Wen, Xue Wang","doi":"10.1109/CCISP55629.2022.9974244","DOIUrl":null,"url":null,"abstract":"The convolution neural network (CNN) is vulnerable to the adversarial attack, because the attack can generate adversarial images to force the CNN to misclassify the original label of the clean image. To defend against the adversarial attack, we propose to detect the adversarial images first and then prevent feeding the adversarial image into the CNN model. In this paper, we employ a steganalysis based method based on rich residual models to detect adversarial images which are generated by the typical attacks including BIM and DEEPFOOL. The rich residual models not only reduce the influences from natural image contents, but also enhance the diversity of the feature. Two typical and complementary methods spatial rich model (SRM) and projected spatial rich model (PSRM) are used to extract the feature, where SRM finely capture the statistical changes on co-occurrence in a small neighborhood, and PSRM remedy the loss information caused by SRM. Experimental results on CIFAR-IO and ImageNet show that the proposed method obtained better performance than existing steganalysis methods for detecting adversarial images generated by BIM and DEEPFOOL attack. The research results are expected to improve the recognition ability of image adversarial samples in the convolutional neural network model, and support the data security of natural image content in image recognition.","PeriodicalId":431851,"journal":{"name":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Detecting Adversarial Examples Using Rich Residual Models to Improve Data Security in CNN Models\",\"authors\":\"Kaijun Wu, Bo Tian, Yougang Wen, Xue Wang\",\"doi\":\"10.1109/CCISP55629.2022.9974244\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The convolution neural network (CNN) is vulnerable to the adversarial attack, because the attack can generate adversarial images to force the CNN to misclassify the original label of the clean image. To defend against the adversarial attack, we propose to detect the adversarial images first and then prevent feeding the adversarial image into the CNN model. In this paper, we employ a steganalysis based method based on rich residual models to detect adversarial images which are generated by the typical attacks including BIM and DEEPFOOL. The rich residual models not only reduce the influences from natural image contents, but also enhance the diversity of the feature. Two typical and complementary methods spatial rich model (SRM) and projected spatial rich model (PSRM) are used to extract the feature, where SRM finely capture the statistical changes on co-occurrence in a small neighborhood, and PSRM remedy the loss information caused by SRM. Experimental results on CIFAR-IO and ImageNet show that the proposed method obtained better performance than existing steganalysis methods for detecting adversarial images generated by BIM and DEEPFOOL attack. 
The research results are expected to improve the recognition ability of image adversarial samples in the convolutional neural network model, and support the data security of natural image content in image recognition.\",\"PeriodicalId\":431851,\"journal\":{\"name\":\"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)\",\"volume\":\"48 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCISP55629.2022.9974244\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCISP55629.2022.9974244","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Detecting Adversarial Examples Using Rich Residual Models to Improve Data Security in CNN Models

Kaijun Wu, Bo Tian, Yougang Wen, Xue Wang
2022 7th International Conference on Communication, Image and Signal Processing (CCISP), November 2022
DOI: 10.1109/CCISP55629.2022.9974244
The convolutional neural network (CNN) is vulnerable to adversarial attacks, which craft adversarial images that force the CNN to misclassify an otherwise correctly labeled clean image. To defend against such attacks, we propose to detect adversarial images first and prevent them from being fed into the CNN model. In this paper, we employ a steganalysis-based method built on rich residual models to detect adversarial images generated by typical attacks, including BIM and DeepFool. The rich residual models not only reduce the influence of natural image content but also enhance the diversity of the features. Two typical and complementary methods, the spatial rich model (SRM) and the projected spatial rich model (PSRM), are used to extract features: SRM finely captures statistical changes in co-occurrences within a small neighborhood, and PSRM compensates for the information lost by SRM. Experimental results on CIFAR-10 and ImageNet show that the proposed method outperforms existing steganalysis methods in detecting adversarial images generated by the BIM and DeepFool attacks. The research results are expected to improve the ability to recognize adversarial image samples in convolutional neural network models and to support the data security of natural image content in image recognition.
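As a rough illustration of the SRM idea described in the abstract (noise residuals computed with a high-pass filter, then quantized, truncated, and summarized as co-occurrence statistics), the following minimal Python sketch extracts one such sub-model feature. All function names and parameters here are illustrative assumptions, not the authors' code: the full SRM uses many high-pass filters and fourth-order co-occurrences, and the PSRM projections are not reproduced here.

```python
# Minimal sketch of a single SRM-style residual sub-model (assumed,
# illustrative implementation -- not the paper's full SRM/PSRM pipeline).
import numpy as np

def srm_style_features(img, q=1.0, T=2):
    """Quantized/truncated first-order residual + co-occurrence histogram.

    img: 2-D grayscale array.
    q:   quantization step (illustrative choice).
    T:   truncation threshold; residuals are clipped to [-T, T].
    """
    img = img.astype(np.float64)
    # First-order horizontal residual: r[i, j] = x[i, j+1] - x[i, j]
    # suppresses smooth image content and emphasizes the noise component.
    r = img[:, 1:] - img[:, :-1]
    # Quantize and truncate, mapping residual values into {-T, ..., T}.
    r = np.clip(np.round(r / q), -T, T).astype(np.int64)
    # Second-order horizontal co-occurrence: histogram of adjacent
    # residual pairs, encoded as a single index in [0, (2T+1)^2).
    pairs = (r[:, :-1] + T) * (2 * T + 1) + (r[:, 1:] + T)
    hist = np.bincount(pairs.ravel(), minlength=(2 * T + 1) ** 2)
    return hist / hist.sum()  # normalized co-occurrence feature vector

# Toy usage: compare features of a clean image and a perturbed copy.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
adv = clean + rng.normal(0.0, 1.0, size=clean.shape)  # stand-in perturbation
f_clean, f_adv = srm_style_features(clean), srm_style_features(adv)
print(np.abs(f_clean - f_adv).sum())  # feature shift a detector could learn
```

In a detection pipeline of the kind the paper describes, such residual co-occurrence features from clean and adversarial training images would be fed to a binary classifier (steganalysis work commonly uses ensemble classifiers), which then flags suspicious inputs before they reach the CNN.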