Adversarial Image Detection in Cyber-Physical Systems
Kartik Mundra, Rahul Modpur, Arpan Chattopadhyay, I. Kar
Proceedings of the 1st ACM Workshop on Autonomous and Intelligent Mobile Systems, 11 January 2020. DOI: 10.1145/3377283.3377285

Abstract
This paper considers the detection of deception attacks on deep neural network (DNN) based image classification in autonomous and cyber-physical systems. Several studies have shown the vulnerability of DNNs to malicious deception attacks, in which an external attacker modifies some or all pixel values of an image so that the change is almost invisible to the human eye yet significant enough for a DNN-based classifier to misclassify the image. This paper proposes a novel pre-processing technique that facilitates detection of such modified images under any DNN-based image classifier and attacker model. The proposed pre-processing algorithm combines principal component analysis (PCA) based decomposition of the image with random-perturbation-based detection to reduce computational complexity. Numerical experiments show that the proposed detection scheme outperforms a competing attack detection algorithm while achieving a low false-alarm rate and low computational complexity.
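To make the pipeline described in the abstract concrete, below is a minimal illustrative sketch of a detector that combines PCA-based reconstruction with random-perturbation probing. It is not the authors' exact algorithm: the function name `detect_adversarial`, the Gaussian noise model, and the parameters `n_trials`, `noise_std`, and `disagreement_threshold` are assumptions introduced here for illustration; the decomposition and decision rule in the paper may differ.

```python
# Illustrative sketch only: a generic PCA-reconstruction plus random-perturbation
# detector in the spirit of the abstract, not the paper's exact method.
import numpy as np
from sklearn.decomposition import PCA

def detect_adversarial(image, classifier, pca: PCA, n_trials=10,
                       noise_std=0.02, disagreement_threshold=0.3):
    """Flag `image` as adversarial if the classifier's label is unstable
    under PCA-based reconstruction and small random perturbations.

    image: flattened image array of shape (d,), values in [0, 1]
    classifier: callable mapping a batch of shape (n, d) to labels of shape (n,)
    pca: a PCA model fitted beforehand on clean training images
    """
    original_label = classifier(image[None, :])[0]

    # Project onto the leading principal components and reconstruct;
    # this tends to suppress low-energy adversarial perturbations.
    reconstructed = pca.inverse_transform(pca.transform(image[None, :]))

    # Probe prediction stability with a few cheap random perturbations.
    disagreements = 0
    for _ in range(n_trials):
        probe = reconstructed + np.random.normal(0.0, noise_std, reconstructed.shape)
        probe = np.clip(probe, 0.0, 1.0)
        if classifier(probe)[0] != original_label:
            disagreements += 1

    # A clean image usually keeps its label; frequent label flips suggest the
    # original prediction sat near an attacker-crafted decision boundary.
    return disagreements / n_trials > disagreement_threshold
```

Using only a handful of random probes on the PCA reconstruction, rather than a search over perturbations, reflects the abstract's stated goal of keeping the detector's computational cost low and independent of the attacker model.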