{"title":"Assessing transferability of adversarial examples against malware detection classifiers","authors":"Yixiang Wang, Jiqiang Liu, Xiaolin Chang","doi":"10.1145/3310273.3323072","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) algorithms provide better performance than traditional algorithms in various applications. However, some unknown flaws in ML classifiers make them sensitive to adversarial examples generated by adding small but fooled purposeful distortions to natural examples. This paper aims to investigate the transferability of adversarial examples generated on a sparse and structured dataset and the ability of adversarial training in resisting adversarial examples. The results demonstrate that adversarial examples generated by DNN can fool a set of ML classifiers such as decision tree, random forest, SVM, CNN and RNN. Also, adversarial training can improve the robustness of DNN in terms of resisting attacks.","PeriodicalId":431860,"journal":{"name":"Proceedings of the 16th ACM International Conference on Computing Frontiers","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th ACM International Conference on Computing Frontiers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3310273.3323072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Machine learning (ML) algorithms outperform traditional algorithms in many applications. However, still poorly understood flaws in ML classifiers make them sensitive to adversarial examples, which are generated by adding small but purposeful distortions to natural examples. This paper investigates the transferability of adversarial examples generated on a sparse, structured dataset, and the ability of adversarial training to resist such examples. The results demonstrate that adversarial examples generated on a DNN can fool a range of ML classifiers, including decision trees, random forests, SVMs, CNNs, and RNNs. Moreover, adversarial training improves the robustness of the DNN against such attacks.
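The abstract does not specify the attack or release code, so the following is only a minimal sketch of the transferability experiment it describes, assuming an FGSM-style one-step attack on a surrogate DNN; the `SurrogateDNN` architecture, the `fgsm` helper, the toy sparse binary data, and all hyperparameters below are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch (not the paper's published code): craft adversarial
# examples on a surrogate DNN, then test whether they also fool
# independently trained classical classifiers.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC

class SurrogateDNN(nn.Module):
    """Small feed-forward net standing in for the paper's DNN."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 2))
    def forward(self, x):
        return self.net(x)

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy data standing in for a sparse, structured malware feature matrix.
rng = np.random.default_rng(0)
X = (rng.random((512, 64)) < 0.1).astype(np.float32)  # sparse binary features
y = rng.integers(0, 2, 512)
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)

# Train the surrogate DNN briefly, then craft adversarial examples on it.
model = SurrogateDNN(64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    nn.functional.cross_entropy(model(Xt), yt).backward()
    opt.step()
X_adv = fgsm(model, Xt, yt).numpy()

# Transferability check: accuracy drop of other classifiers on X_adv.
for clf in (DecisionTreeClassifier(), RandomForestClassifier(),
            LinearSVC(dual=False)):
    clf.fit(X, y)
    print(type(clf).__name__,
          "clean acc:", clf.score(X, y),
          "adv acc:", clf.score(X_adv, y))

# Adversarial training sketch: retrain the DNN on a clean/adversarial mix,
# which the paper reports improves robustness to such attacks.
X_mix = torch.cat([Xt, torch.from_numpy(X_adv)])
y_mix = torch.cat([yt, yt])
for _ in range(50):
    opt.zero_grad()
    nn.functional.cross_entropy(model(X_mix), y_mix).backward()
    opt.step()
```

A large gap between "clean acc" and "adv acc" for the tree, forest, and SVM models would correspond to the transferability effect the paper reports; on the random toy data above the numbers are meaningless and serve only to show the experimental shape.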