{"title":"Securing Malware Cognitive Systems against Adversarial Attacks","authors":"Yuede Ji, Benjamin Bowman, H. H. Huang","doi":"10.1109/ICCC.2019.00014","DOIUrl":null,"url":null,"abstract":"The cognitive systems along with the machine learning techniques have provided significant improvements for many applications. However, recent adversarial attacks, such as data poisoning, evasion attacks, and exploratory attacks, have shown to be able to either cause the machine learning methods to misbehave, or leak sensitive model parameters. In this work, we have devised a prototype of a malware cognitive system, called DeepArmour, which performs robust malware classification against adversarial attacks. At the heart of our method is a voting system with three different machine learning malware classifiers: random forest, multi-layer perceptron, and structure2vec. In addition, DeepArmour applies several adversarial countermeasures, such as feature reconstruction and adversarial retraining to strengthen the robustness. We tested DeepArmour on a malware execution trace dataset, which has 12, 536 malware in five categories. We are able to achieve 0.989 accuracy with 10-fold cross validation. Further, to demonstrate the ability of combating adversarial attacks, we have performed a white-box evasion attack on the dataset and showed how our system is resilient to such attacks. Particularly, DeepArmour is able to achieve 0.675 accuracy for the generated adversarial attacks which are unknown to the model. After retraining with only 10% adversarial samples, DeepArmour is able to achieve 0.839 accuracy","PeriodicalId":262923,"journal":{"name":"2019 IEEE International Conference on Cognitive Computing (ICCC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Cognitive Computing (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCC.2019.00014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
Cognitive systems, combined with machine learning techniques, have delivered significant improvements across many applications. However, recent adversarial attacks, such as data poisoning, evasion, and exploratory attacks, have been shown to either cause machine learning methods to misbehave or leak sensitive model parameters. In this work, we devise a prototype of a malware cognitive system, called DeepArmour, which performs robust malware classification in the presence of adversarial attacks. At the heart of our method is a voting system built on three different machine learning malware classifiers: random forest, multi-layer perceptron, and structure2vec. In addition, DeepArmour applies several adversarial countermeasures, such as feature reconstruction and adversarial retraining, to strengthen its robustness. We tested DeepArmour on a malware execution trace dataset containing 12,536 malware samples in five categories, achieving 0.989 accuracy with 10-fold cross validation. Further, to demonstrate its ability to combat adversarial attacks, we performed a white-box evasion attack on the dataset and showed how our system is resilient to such attacks. In particular, DeepArmour achieves 0.675 accuracy on the generated adversarial samples, which are unknown to the model. After retraining with only 10% of the adversarial samples, DeepArmour achieves 0.839 accuracy.
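The abstract does not give implementation details, so the following is only a minimal sketch of the two ideas it names: majority voting over three classifiers, and adversarial retraining by folding a small fraction of adversarial samples back into the training set. It assumes fixed-length feature vectors extracted from execution traces and uses scikit-learn; because structure2vec is not available there, a second multi-layer perceptron stands in as a placeholder, and feature reconstruction and the white-box attack itself are out of scope. This is not the authors' implementation of DeepArmour.

```python
# Sketch only: voting ensemble of three classifiers plus adversarial retraining,
# loosely following the abstract's description. Not the paper's actual system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score


def build_voter():
    # Hard (majority) voting over the three base classifiers.
    return VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("mlp", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)),
            # Placeholder for structure2vec, which in the paper operates on
            # graph-structured trace features; swapped for an MLP here.
            ("s2v_stub", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
        ],
        voting="hard",
    )


def evaluate_and_retrain(X, y, X_adv, y_adv, adv_fraction=0.1):
    """X, y: clean feature matrix and labels.
    X_adv, y_adv: adversarial samples (e.g., from an evasion attack) with true labels."""
    voter = build_voter()
    print("10-fold CV accuracy:", cross_val_score(voter, X, y, cv=10).mean())

    voter.fit(X, y)
    print("accuracy on unseen adversarial samples:", voter.score(X_adv, y_adv))

    # Adversarial retraining: augment the training set with a small fraction
    # of the adversarial samples and refit the ensemble.
    n_adv = max(1, int(adv_fraction * len(X_adv)))
    X_aug = np.vstack([X, X_adv[:n_adv]])
    y_aug = np.concatenate([y, y_adv[:n_adv]])
    retrained = build_voter().fit(X_aug, y_aug)
    print("accuracy after retraining:",
          retrained.score(X_adv[n_adv:], y_adv[n_adv:]))
```

Hard voting means the ensemble's prediction is the class chosen by at least two of the three classifiers, which is what makes a single misled classifier insufficient to flip the final label.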