Securing Malware Cognitive Systems against Adversarial Attacks

Yuede Ji, Benjamin Bowman, H. H. Huang
{"title":"Securing Malware Cognitive Systems against Adversarial Attacks","authors":"Yuede Ji, Benjamin Bowman, H. H. Huang","doi":"10.1109/ICCC.2019.00014","DOIUrl":null,"url":null,"abstract":"The cognitive systems along with the machine learning techniques have provided significant improvements for many applications. However, recent adversarial attacks, such as data poisoning, evasion attacks, and exploratory attacks, have shown to be able to either cause the machine learning methods to misbehave, or leak sensitive model parameters. In this work, we have devised a prototype of a malware cognitive system, called DeepArmour, which performs robust malware classification against adversarial attacks. At the heart of our method is a voting system with three different machine learning malware classifiers: random forest, multi-layer perceptron, and structure2vec. In addition, DeepArmour applies several adversarial countermeasures, such as feature reconstruction and adversarial retraining to strengthen the robustness. We tested DeepArmour on a malware execution trace dataset, which has 12, 536 malware in five categories. We are able to achieve 0.989 accuracy with 10-fold cross validation. Further, to demonstrate the ability of combating adversarial attacks, we have performed a white-box evasion attack on the dataset and showed how our system is resilient to such attacks. Particularly, DeepArmour is able to achieve 0.675 accuracy for the generated adversarial attacks which are unknown to the model. After retraining with only 10% adversarial samples, DeepArmour is able to achieve 0.839 accuracy","PeriodicalId":262923,"journal":{"name":"2019 IEEE International Conference on Cognitive Computing (ICCC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Cognitive Computing (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCC.2019.00014","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Cognitive systems, together with machine learning techniques, have provided significant improvements for many applications. However, recent adversarial attacks, such as data poisoning, evasion attacks, and exploratory attacks, have been shown to either cause machine learning methods to misbehave or leak sensitive model parameters. In this work, we have devised a prototype of a malware cognitive system, called DeepArmour, which performs robust malware classification against adversarial attacks. At the heart of our method is a voting system with three different machine learning malware classifiers: random forest, multi-layer perceptron, and structure2vec. In addition, DeepArmour applies several adversarial countermeasures, such as feature reconstruction and adversarial retraining, to strengthen robustness. We tested DeepArmour on a malware execution trace dataset containing 12,536 malware samples in five categories, achieving 0.989 accuracy with 10-fold cross validation. Further, to demonstrate its ability to combat adversarial attacks, we performed a white-box evasion attack on the dataset and showed how our system is resilient to such attacks. In particular, DeepArmour achieves 0.675 accuracy on generated adversarial samples that are unknown to the model. After retraining with only 10% of the adversarial samples, DeepArmour achieves 0.839 accuracy.
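The voting design described in the abstract can be illustrated with a short sketch. This is a minimal approximation under stated assumptions, not the authors' implementation: the synthetic features, the classifier hyperparameters, and the stand-in for structure2vec (a graph-embedding model with no scikit-learn counterpart) are all hypothetical choices for illustration. The final lines show the adversarial-retraining idea of folding roughly 10% adversarial samples back into the training set.

```python
# Minimal sketch of a three-classifier majority-voting ensemble with
# 10-fold cross validation and adversarial retraining. Data is synthetic;
# all feature shapes and hyperparameters are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 64))        # placeholder execution-trace features
y = rng.integers(0, 5, 1000)      # five malware categories, as in the paper

voter = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=500,
                              random_state=0)),
        # structure2vec has no scikit-learn implementation; a second MLP
        # stands in here purely as a placeholder for the third voter.
        ("s2v_stub", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                   random_state=1)),
    ],
    voting="hard",                # plain majority vote across the three models
)

# 10-fold cross validation, mirroring the paper's evaluation protocol.
scores = cross_val_score(voter, X, y, cv=10)
print(f"mean CV accuracy: {scores.mean():.3f}")

# Adversarial retraining sketch: augment training data with ~10% adversarial
# samples and refit. Real adversarial samples would come from an evasion
# attack; small random perturbations stand in for them here.
n_adv = int(0.10 * len(X))
X_adv = X[:n_adv] + rng.normal(0.0, 0.05, size=(n_adv, 64))
y_adv = y[:n_adv]
voter.fit(np.vstack([X, X_adv]), np.concatenate([y, y_adv]))
```

The hard-voting scheme means an adversarial sample must fool at least two of the three heterogeneous models to flip the ensemble's prediction, which is the intuition behind the robustness the paper reports.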