Poster: Adversarial Examples for Classifiers in High-Dimensional Network Data

Muhammad Ejaz Ahmed, Hyoungshick Kim
{"title":"Poster: Adversarial Examples for Classifiers in High-Dimensional Network Data","authors":"Muhammad Ejaz Ahmed, Hyoungshick Kim","doi":"10.1145/3133956.3138853","DOIUrl":null,"url":null,"abstract":"Many machine learning methods make assumptions about the data, such as, data stationarity and data independence, for an efficient learning process that requires less data. However, these assumptions may give rise to vulnerabilities if violated by smart adversaries. In this paper, we propose a novel algorithm to craft the input samples by modifying a certain fraction of input features as small as in order to bypass the decision boundary of widely used binary classifiers using Support Vector Machine (SVM). We show that our algorithm can reliably produce adversarial samples which are misclassified with 98% success rate while modifying 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high demensional network data by intentionally performing evasion attacks with carefully designed adversarial examples. The proposed algorithm is evaluated using real network traffic datasets (CAIDA 2007 and CAIDA 2016).","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3133956.3138853","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

Many machine learning methods make assumptions about the data, such as data stationarity and data independence, to enable an efficient learning process that requires less data. However, these assumptions may give rise to vulnerabilities if violated by smart adversaries. In this paper, we propose a novel algorithm that crafts input samples by modifying as small a fraction of the input features as possible in order to bypass the decision boundary of widely used binary classifiers based on Support Vector Machines (SVMs). We show that our algorithm can reliably produce adversarial samples that are misclassified with a 98% success rate while modifying 22% of the input features on average. Our goal is to evaluate the robustness of classification algorithms for high-dimensional network data by intentionally performing evasion attacks with carefully designed adversarial examples. The proposed algorithm is evaluated on real network traffic datasets (CAIDA 2007 and CAIDA 2016).
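
The poster does not spell out the attack algorithm here, but the core idea of crossing a linear SVM decision boundary while touching only a small fraction of features can be illustrated with a minimal sketch. The greedy feature ordering, the per-feature budget `eps`, and the synthetic data below are illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch of a sparse evasion attack on a linear SVM.
# NOT the poster's algorithm; it only illustrates flipping a linear
# decision f(x) = w.x + b by perturbing the most influential features.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-in for high-dimensional network traffic features.
X = rng.normal(size=(1000, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

def evade(x, w, b, eps=0.5, margin=1e-3):
    """Try to flip the SVM's decision on x by perturbing features in
    order of decreasing |w_i|, changing each by at most eps, so only
    the most influential features are modified."""
    x_adv = x.copy()
    sign = np.sign(w @ x + b)         # which side of the boundary we start on
    n_changed = 0
    for i in np.argsort(-np.abs(w)):  # most influential features first
        if w[i] == 0:                 # remaining weights are all zero
            break
        f = w @ x_adv + b
        if np.sign(f) != sign:        # boundary crossed: stop early
            break
        # Step that would push the score just past zero, clipped to
        # the per-feature budget eps.
        step = float(np.clip(-(f + sign * margin) / w[i], -eps, eps))
        x_adv[i] += step
        n_changed += 1
    return x_adv, n_changed

x = X[clf.predict(X) == 1][0]         # a sample currently classified as positive
x_adv, n_changed = evade(x, w, b)
print("original label:   ", clf.predict(x.reshape(1, -1))[0])
print("adversarial label:", clf.predict(x_adv.reshape(1, -1))[0])
print("features modified: %d / %d" % (n_changed, X.shape[1]))
```

For a linear model, ranking features by |w_i| is the natural greedy choice: each unit of perturbation on a high-weight feature moves the decision score furthest, so the boundary is crossed with the fewest modified features, mirroring the poster's goal of evading the classifier while changing only a small fraction of the input.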