A Statistical Defense Approach for Detecting Adversarial Examples

Alessandro Cennamo, Ido Freeman, A. Kummert

Proceedings of the 2020 International Conference on Pattern Recognition and Intelligent Systems

DOI: 10.1145/3415048.3416103 (https://doi.org/10.1145/3415048.3416103)
Citations: 3
Abstract
Adversarial examples are maliciously modified inputs crafted to fool machine learning (ML) algorithms. The existence of such inputs poses a major obstacle to the wider adoption of ML-based solutions. Many researchers have already contributed to the topic, providing both cutting-edge attack techniques and various defense strategies. This work focuses on the development of a system capable of detecting adversarial samples by exploiting statistical information from the training set. Our detector computes several distorted replicas of the test input and collects the classifier's prediction vectors on them to build a meaningful signature for the detection task. The signature is then projected onto a class-specific statistic vector to infer the input's nature, where the class predicted for the original input selects which statistic vector is used. We show that our method reliably detects malicious inputs, outperforming state-of-the-art approaches in various settings, while remaining complementary to other defense solutions.
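The detection pipeline sketched in the abstract (distort the input, collect prediction vectors over the replicas, aggregate them into a signature, and score that signature against a statistic vector chosen by the predicted class) can be illustrated with the following minimal sketch. This is not the authors' implementation: the distortion type (Gaussian noise), the aggregation (mean of softmax outputs), the dot-product scoring, the threshold, the toy classifier, and all function names here are assumptions made for illustration only.

```python
import numpy as np

def make_replicas(x, n_replicas=8, noise_std=0.1, seed=0):
    """Create distorted copies of the input.
    Additive Gaussian noise is an assumed stand-in for the paper's distortions."""
    rng = np.random.default_rng(seed)
    return x[None, :] + rng.normal(0.0, noise_std, size=(n_replicas,) + x.shape)

def signature(classifier, x, **replica_kwargs):
    """Collect the classifier's prediction vectors over all replicas and
    aggregate them (here: mean) into a single signature vector."""
    probs = np.stack([classifier(r) for r in make_replicas(x, **replica_kwargs)])
    return probs.mean(axis=0)

def detect(classifier, x, class_stats, threshold=0.5):
    """Project the signature onto the statistic vector of the class predicted
    for the *original* input; a low score flags the input as adversarial."""
    pred = int(np.argmax(classifier(x)))          # class selects the statistic vector
    score = float(signature(classifier, x) @ class_stats[pred])
    return score < threshold                      # True -> flagged as adversarial

# Toy two-class linear-softmax classifier, for demonstration only.
def toy_classifier(x):
    W = np.array([[2.0, -1.0], [-1.0, 2.0]])
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed per-class statistic vectors; the paper derives these from the
# training set, here the identity is used purely as a placeholder.
class_stats = np.eye(2)

clean = np.array([1.0, 0.0])   # confidently classified input
print(detect(toy_classifier, clean, class_stats))
```

For a confidently classified input, the replica predictions stay concentrated on the predicted class, so the projected score is high and the input is not flagged; inputs near a decision boundary (a common symptom of adversarial perturbation) yield scattered replica predictions and a lower score.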