Detection of Induced False Negatives in Malware Samples

Adrian Wood, Michael N. Johnstone
{"title":"恶意软件样本中诱导假阴性的检测","authors":"Adrian Wood, Michael N. Johnstone","doi":"10.1109/PST52912.2021.9647787","DOIUrl":null,"url":null,"abstract":"Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models to identify new malware variants which conventional malware detection methods may not detect. Machine learning, has however, been found to be vulnerable to different types of adversarial attacks, in which an attacker is able to negatively affect the classification ability of the ML model. Several defensive measures to prevent adversarial poisoning attacks have been developed, but they often rely on the use of a trusted clean dataset to help identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset, but instead, identifies intentional false negatives (zero day malware classified as benign) at the testing stage by examining the activation weights of the ML model. The defence was able to identify 94.07% of the successful targeted poisoning attacks.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Detection of Induced False Negatives in Malware Samples\",\"authors\":\"Adrian Wood, Michael N. Johnstone\",\"doi\":\"10.1109/PST52912.2021.9647787\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models to identify new malware variants which conventional malware detection methods may not detect. Machine learning, has however, been found to be vulnerable to different types of adversarial attacks, in which an attacker is able to negatively affect the classification ability of the ML model. Several defensive measures to prevent adversarial poisoning attacks have been developed, but they often rely on the use of a trusted clean dataset to help identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset, but instead, identifies intentional false negatives (zero day malware classified as benign) at the testing stage by examining the activation weights of the ML model. 
The defence was able to identify 94.07% of the successful targeted poisoning attacks.\",\"PeriodicalId\":144610,\"journal\":{\"name\":\"2021 18th International Conference on Privacy, Security and Trust (PST)\",\"volume\":\"5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 18th International Conference on Privacy, Security and Trust (PST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PST52912.2021.9647787\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 18th International Conference on Privacy, Security and Trust (PST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PST52912.2021.9647787","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Malware detection is an important area of cyber security. Computer systems rely on malware detection applications to prevent malware attacks from succeeding. Malware detection is not a straightforward task, as new variants of malware are generated at an increasing rate. Machine learning (ML) has been utilised to generate predictive classification models to identify new malware variants that conventional malware detection methods may not detect. Machine learning has, however, been found to be vulnerable to different types of adversarial attacks, in which an attacker is able to negatively affect the classification ability of the ML model. Several defensive measures to prevent adversarial poisoning attacks have been developed, but they often rely on the use of a trusted clean dataset to help identify and remove adversarial examples from the training dataset. The defence in this paper does not require a trusted clean dataset, but instead identifies intentional false negatives (zero-day malware classified as benign) at the testing stage by examining the activation weights of the ML model. The defence was able to identify 94.07% of the successful targeted poisoning attacks.
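The abstract does not spell out how the activation weights are examined, but the general idea of a test-time activation check can be sketched with standard tooling. The following is a minimal, illustrative sketch only, assuming a small feed-forward classifier: build an activation profile from samples labelled benign, then flag "benign" verdicts whose hidden activations are statistical outliers as candidate induced false negatives. All names here (MalwareNet, benign_profile, is_suspicious) and the z-score heuristic are assumptions for illustration, not the authors' actual procedure.

```python
# Illustrative sketch (NOT the paper's exact method): flag samples that the
# classifier labels benign but whose hidden-layer activations deviate sharply
# from the activation profile of other benign-labelled samples.
import numpy as np
import torch
import torch.nn as nn

class MalwareNet(nn.Module):
    """Toy feed-forward malware classifier over fixed-length feature vectors."""
    def __init__(self, n_features: int = 256):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.out = nn.Linear(64, 1)  # sigmoid score: > 0.5 => malicious

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.out(self.hidden(x)))

    def activations(self, x: torch.Tensor) -> torch.Tensor:
        # Hidden-layer activations: the quantity the defence inspects.
        with torch.no_grad():
            return self.hidden(x)

def benign_profile(model: MalwareNet, benign_x: torch.Tensor):
    """Per-unit mean/std of hidden activations over benign-labelled samples."""
    acts = model.activations(benign_x).numpy()
    return acts.mean(axis=0), acts.std(axis=0) + 1e-8  # avoid divide-by-zero

def is_suspicious(model: MalwareNet, x: torch.Tensor,
                  mean: np.ndarray, std: np.ndarray,
                  threshold: float = 3.0) -> bool:
    """True if a benign verdict looks like a candidate induced false negative."""
    act = model.activations(x.unsqueeze(0)).numpy()[0]
    z = np.abs((act - mean) / std)      # per-unit z-scores vs. benign profile
    return bool(z.mean() > threshold)   # outlier => escalate for inspection
```

A real deployment would still need to choose which layer's activations to profile and calibrate the threshold against an acceptable false-alarm rate; the sketch only shows why no trusted clean dataset is required, since the profile is built from the model's own benign verdicts.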