Shallow Security: on the Creation of Adversarial Variants to Evade Machine Learning-Based Malware Detectors

Fabrício Ceschin, Marcus Botacin, Heitor Murilo Gomes, Luiz Oliveira, A. Grégio
{"title":"浅安全:关于创建对抗性变体以逃避基于机器学习的恶意软件检测器","authors":"Fabrício Ceschin, Marcus Botacin, Heitor Murilo Gomes, Luiz Oliveira, A. Grégio","doi":"10.1145/3375894.3375898","DOIUrl":null,"url":null,"abstract":"The use of Machine Learning (ML) techniques for malware detection has been a trend in the last two decades. More recently, researchers started to investigate adversarial approaches to bypass these ML-based malware detectors. Adversarial attacks became so popular that a large Internet company has launched a public challenge to encourage researchers to bypass their (three) ML-based static malware detectors. Our research group teamed to participate in this challenge in August/2019, accomplishing the bypass of all 150 tests proposed by the company. To do so, we implemented an automatic exploitation method which moves the original malware binary sections to resources and includes new chunks of data to it to create adversarial samples that not only bypassed their ML detectors, but also real AV engines as well (with a lower detection rate than the original samples). In this paper, we detail our methodological approach to overcome the challenge and report our findings. With these results, we expect to contribute with the community and provide better understanding on ML-based detectors weaknesses. We also pinpoint future research directions toward the development of more robust malware detectors against adversarial machine learning.","PeriodicalId":288970,"journal":{"name":"Proceedings of the 3rd Reversing and Offensive-oriented Trends Symposium","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":"{\"title\":\"Shallow Security: on the Creation of Adversarial Variants to Evade Machine Learning-Based Malware Detectors\",\"authors\":\"Fabrício Ceschin, Marcus Botacin, Heitor Murilo Gomes, Luiz Oliveira, A. 
Grégio\",\"doi\":\"10.1145/3375894.3375898\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The use of Machine Learning (ML) techniques for malware detection has been a trend in the last two decades. More recently, researchers started to investigate adversarial approaches to bypass these ML-based malware detectors. Adversarial attacks became so popular that a large Internet company has launched a public challenge to encourage researchers to bypass their (three) ML-based static malware detectors. Our research group teamed to participate in this challenge in August/2019, accomplishing the bypass of all 150 tests proposed by the company. To do so, we implemented an automatic exploitation method which moves the original malware binary sections to resources and includes new chunks of data to it to create adversarial samples that not only bypassed their ML detectors, but also real AV engines as well (with a lower detection rate than the original samples). In this paper, we detail our methodological approach to overcome the challenge and report our findings. With these results, we expect to contribute with the community and provide better understanding on ML-based detectors weaknesses. 
We also pinpoint future research directions toward the development of more robust malware detectors against adversarial machine learning.\",\"PeriodicalId\":288970,\"journal\":{\"name\":\"Proceedings of the 3rd Reversing and Offensive-oriented Trends Symposium\",\"volume\":\"46 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 3rd Reversing and Offensive-oriented Trends Symposium\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3375894.3375898\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd Reversing and Offensive-oriented Trends Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3375894.3375898","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 32

Abstract

The use of Machine Learning (ML) techniques for malware detection has been a trend in the last two decades. More recently, researchers started to investigate adversarial approaches to bypass these ML-based malware detectors. Adversarial attacks became so popular that a large Internet company launched a public challenge encouraging researchers to bypass its (three) ML-based static malware detectors. Our research group teamed up to participate in this challenge in August 2019, accomplishing the bypass of all 150 tests proposed by the company. To do so, we implemented an automatic exploitation method that moves the original malware binary sections to resources and appends new chunks of data to the file, creating adversarial samples that bypassed not only the company's ML detectors but also real AV engines (with a lower detection rate than the original samples). In this paper, we detail our methodological approach to overcoming the challenge and report our findings. With these results, we expect to contribute to the community and provide a better understanding of the weaknesses of ML-based detectors. We also pinpoint future research directions toward the development of malware detectors that are more robust against adversarial machine learning.