Are Malware Detection Classifiers Adversarially Vulnerable to Actor-Critic based Evasion Attacks?

Impact Factor: 1.1 · JCR Quartile: Q4 · Category: Computer Science, Information Systems
Hemant Rathore, Sujay C Sharma, S. Sahay, Mohit Sewak
DOI: 10.4108/eai.31-5-2022.174087 · Journal: EAI Endorsed Transactions on Scalable Information Systems, page e6 · Published: 2022-05-31
Citations: 1

Abstract

Android devices such as smartphones and tablets have become immensely popular and are an integral part of our daily lives. However, this popularity has also attracted malware developers, and Android malware has grown aggressively in the last few years. Research shows that machine learning, ensemble, and deep learning models can successfully be used to detect Android malware. However, the robustness of these models against well-crafted adversarial samples is not well investigated. Therefore, we first stepped into the adversaries' shoes and proposed the ACE attack, which adds limited perturbations to malicious applications such that they are misclassified as benign and remain undetected by different malware detection models. The ACE agent is designed on an actor-critic architecture that uses reinforcement learning to add perturbations (a maximum of ten) while maintaining the structural and functional integrity of the adversarial malicious applications. The proposed attack is validated against twenty-two different malware detection models based on two feature sets and eleven different classification algorithms. The ACE attack achieved an average fooling rate (with a maximum of ten perturbations) of 46.63% across eleven permission-based malware detection models and 95.31% across eleven intent-based detection models. The attack forced a massive number of misclassifications, leading to an average accuracy drop of 18.07% and 36.62% in the permission-based and intent-based malware detection models, respectively. We then designed a defense mechanism using an adversarial retraining strategy, which uses adversarial malware samples with correct class labels to retrain the models. The defense mechanism improves the average accuracy by 24.88% and 76.51% for the eleven permission-based and eleven intent-based malware detection models, respectively.
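To make the threat model concrete, the following is a minimal, self-contained sketch of a feature-addition evasion attack against a trained malware classifier. It is not the paper's actor-critic (ACE) agent: a greedy search stands in for the reinforcement-learning policy, and the data, labelling rule, and classifier are synthetic placeholders. Only the key constraints are mirrored: perturbations may only *add* features (setting a 0 bit to 1, which preserves app functionality), and the budget is capped at ten.

```python
# Hypothetical sketch: greedy feature-addition evasion against a classifier.
# NOT the paper's ACE agent; data and labelling rule are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic binary feature vectors (standing in for permissions/intents).
X = rng.integers(0, 2, size=(600, 50))
# Toy labelling rule: "malicious" (1) when malicious-indicative bits (0-9)
# outnumber benign-indicative bits (10-19).
y = (X[:, :10].sum(axis=1) > X[:, 10:20].sum(axis=1)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def evade(x, clf, budget=10):
    """Greedily add features (0 -> 1) that most reduce P(malicious)."""
    x = x.copy()
    for _ in range(budget):
        if clf.predict(x[None, :])[0] == 0:       # verdict is now benign
            return x, True
        zeros = np.flatnonzero(x == 0)            # bits we may still set
        if zeros.size == 0:
            break
        # Try every single-bit addition at once and keep the best one.
        trials = np.repeat(x[None, :], zeros.size, axis=0)
        trials[np.arange(zeros.size), zeros] = 1
        scores = clf.predict_proba(trials)[:, 1]
        x[zeros[np.argmin(scores)]] = 1
    return x, bool(clf.predict(x[None, :])[0] == 0)

malware = X[y == 1][:100]                         # subset for speed
fooled = sum(evade(x, clf)[1] for x in malware)
print(f"fooling rate: {fooled / len(malware):.2%}")
```

The fooling rate here plays the same role as in the abstract: the fraction of malicious samples that a bounded perturbation pushes across the decision boundary. The RL formulation in the paper replaces the exhaustive greedy step with a learned policy, which scales to feature spaces where trying every flip is infeasible.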
In conclusion, we found that malware detection models based on machine learning, ensemble, and deep learning perform poorly against adversarial samples. Malware detection models should therefore be investigated for vulnerabilities and hardened to enhance their overall forensic knowledge and adversarial robustness.
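The adversarial-retraining defense described in the abstract can be sketched along similar lines. Everything below is a hypothetical, synthetic stand-in: a random feature-addition attack substitutes for ACE, and the defense simply refits the model on the original training set augmented with adversarial samples that carry their correct (malicious) labels.

```python
# Hypothetical sketch of adversarial retraining; synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(600, 50))
# Same style of toy rule: malicious when malicious-indicative bits (0-9)
# outnumber benign-indicative bits (10-19).
y = (X[:, :10].sum(axis=1) > X[:, 10:20].sum(axis=1)).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def perturb(x, budget=10):
    """Randomly set up to `budget` zero bits (adding benign-looking features)."""
    x = x.copy()
    zeros = np.flatnonzero(x == 0)
    flips = rng.choice(zeros, size=min(budget, zeros.size), replace=False)
    x[flips] = 1
    return x

malware = X[y == 1]
adv = np.array([perturb(x) for x in malware])    # adversarial malware samples

def fooling_rate(model, samples):
    # Fraction of (still malicious) samples the model judges benign.
    return float((model.predict(samples) == 0).mean())

before = fooling_rate(clf, adv)

# Defense: retrain on the training set augmented with the adversarial
# samples, labelled with their CORRECT class (malicious).
X_aug = np.vstack([X, adv])
y_aug = np.concatenate([y, np.ones(len(adv), dtype=int)])
clf_def = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)

after = fooling_rate(clf_def, adv)
print(f"fooling rate before retraining: {before:.2%}, after: {after:.2%}")
```

The retrained model sees the perturbed samples with the right labels, so the same perturbations no longer move them across the decision boundary, which is the mechanism behind the accuracy recovery the abstract reports.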
Source journal: EAI Endorsed Transactions on Scalable Information Systems (Computer Science, Information Systems)
CiteScore: 2.80 · Self-citation rate: 15.40% · Articles per year: 49 · Review time: 10 weeks