{"title":"海报:高维网络数据中分类器的对抗性示例","authors":"Muhammad Ejaz Ahmed, Hyoungshick Kim","doi":"10.1145/3133956.3138853","DOIUrl":null,"url":null,"abstract":"Many machine learning methods make assumptions about the data, such as, data stationarity and data independence, for an efficient learning process that requires less data. However, these assumptions may give rise to vulnerabilities if violated by smart adversaries. In this paper, we propose a novel algorithm to craft the input samples by modifying a certain fraction of input features as small as in order to bypass the decision boundary of widely used binary classifiers using Support Vector Machine (SVM). We show that our algorithm can reliably produce adversarial samples which are misclassified with 98% success rate while modifying 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high demensional network data by intentionally performing evasion attacks with carefully designed adversarial examples. The proposed algorithm is evaluated using real network traffic datasets (CAIDA 2007 and CAIDA 2016).","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Poster: Adversarial Examples for Classifiers in High-Dimensional Network Data\",\"authors\":\"Muhammad Ejaz Ahmed, Hyoungshick Kim\",\"doi\":\"10.1145/3133956.3138853\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many machine learning methods make assumptions about the data, such as, data stationarity and data independence, for an efficient learning process that requires less data. However, these assumptions may give rise to vulnerabilities if violated by smart adversaries. In this paper, we propose a novel algorithm to craft the input samples by modifying a certain fraction of input features as small as in order to bypass the decision boundary of widely used binary classifiers using Support Vector Machine (SVM). We show that our algorithm can reliably produce adversarial samples which are misclassified with 98% success rate while modifying 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high demensional network data by intentionally performing evasion attacks with carefully designed adversarial examples. 
The proposed algorithm is evaluated using real network traffic datasets (CAIDA 2007 and CAIDA 2016).\",\"PeriodicalId\":191367,\"journal\":{\"name\":\"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3133956.3138853\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3133956.3138853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Poster: Adversarial Examples for Classifiers in High-Dimensional Network Data
Many machine learning methods make assumptions about the data, such as data stationarity and data independence, to enable an efficient learning process that requires less data. However, these assumptions may give rise to vulnerabilities if violated by smart adversaries. In this paper, we propose a novel algorithm that crafts input samples by modifying as small a fraction of input features as possible in order to bypass the decision boundary of widely used binary classifiers based on Support Vector Machines (SVMs). We show that our algorithm can reliably produce adversarial samples that are misclassified with a 98% success rate while modifying only 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high-dimensional network data by intentionally performing evasion attacks with carefully designed adversarial examples. The proposed algorithm is evaluated using real network traffic datasets (CAIDA 2007 and CAIDA 2016).
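The abstract describes the attack idea (push a sample across an SVM's decision boundary while perturbing only a small fraction of features) but not the algorithm itself. The sketch below is a minimal, assumed illustration of that idea for a linear SVM using scikit-learn; the feature-selection heuristic (perturb the features with the largest weights), the `evade` helper, and all parameter choices are ours for illustration, not the authors' method.

```python
# Hypothetical sketch of a minimal-feature evasion attack on a linear SVM.
# Not the paper's algorithm; an illustration of the general idea only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic stand-in for high-dimensional network traffic features.
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=20, random_state=0)
clf = LinearSVC(dual=False).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

def evade(x, frac=0.22, margin=1e-3):
    """Flip a positively classified sample to negative by perturbing
    only the `frac` fraction of features with the largest |w_i|."""
    k = max(1, int(frac * len(w)))
    S = np.argsort(-np.abs(w))[:k]      # most influential features
    f = w @ x + b                       # signed score; > 0 means class 1
    delta = np.zeros_like(x)
    # Smallest L2 perturbation restricted to S that pushes f just past 0.
    delta[S] = -(f + margin) * w[S] / (w[S] @ w[S])
    return x + delta

x = X[clf.predict(X) == 1][0]           # a sample the classifier flags
x_adv = evade(x)
print(clf.predict([x]), clf.predict([x_adv]))  # e.g. [1] -> [0]
```

The `frac=0.22` default mirrors the abstract's reported average of 22% of input features modified; for a non-linear SVM kernel, a gradient-based variant of the same idea would be needed in place of the closed-form step above.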