A Multi-Strategy Adversarial Attack Method for Deep Learning Based Malware Detectors

Wang Yang, Fan Yin
{"title":"基于深度学习的恶意软件检测器多策略对抗攻击方法","authors":"Wang Yang, Fan Yin","doi":"10.1109/CSP58884.2023.00018","DOIUrl":null,"url":null,"abstract":"Deep learning allows building high-accuracy malware detectors without complicated feature engineering. However, research shows that the deep learning model is vulnerable and can be deceived if attackers add perturbation to input samples to craft adversarial examples deliberately. By altering the pixel values of the images, attackers have been able to generate adversarial examples that can fool state-of-the-art deep learning based image classifiers. However, Windows malware is a structured binary program file. Therefore, arbitrarily altering its contents will often break the program's functionality. In order to solve this problem, a standard but inefficient method is to run the sample in the sandbox to verify whether its functionality is preserved. This paper proposes a multi-strategy adversarial attack method, which can generate malware adversarial examples with functionality-preserving. Our method manipulates the redundant or extended space in the Windows malware binary, so it will not break functionality. Experiments show that our method has a high attack success rate and efficiency.","PeriodicalId":255083,"journal":{"name":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Multi-Strategy Adversarial Attack Method for Deep Learning Based Malware Detectors\",\"authors\":\"Wang Yang, Fan Yin\",\"doi\":\"10.1109/CSP58884.2023.00018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning allows building high-accuracy malware detectors without complicated feature engineering. However, research shows that the deep learning model is vulnerable and can be deceived if attackers add perturbation to input samples to craft adversarial examples deliberately. By altering the pixel values of the images, attackers have been able to generate adversarial examples that can fool state-of-the-art deep learning based image classifiers. However, Windows malware is a structured binary program file. Therefore, arbitrarily altering its contents will often break the program's functionality. In order to solve this problem, a standard but inefficient method is to run the sample in the sandbox to verify whether its functionality is preserved. This paper proposes a multi-strategy adversarial attack method, which can generate malware adversarial examples with functionality-preserving. Our method manipulates the redundant or extended space in the Windows malware binary, so it will not break functionality. 
Experiments show that our method has a high attack success rate and efficiency.\",\"PeriodicalId\":255083,\"journal\":{\"name\":\"2023 7th International Conference on Cryptography, Security and Privacy (CSP)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 7th International Conference on Cryptography, Security and Privacy (CSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSP58884.2023.00018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 7th International Conference on Cryptography, Security and Privacy (CSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSP58884.2023.00018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning allows building high-accuracy malware detectors without complicated feature engineering. However, research shows that deep learning models are vulnerable: attackers can deceive them by deliberately adding perturbations to input samples to craft adversarial examples. By altering image pixel values, attackers have been able to generate adversarial examples that fool state-of-the-art deep learning based image classifiers. Windows malware, however, is a structured binary program file, so arbitrarily altering its contents will often break the program's functionality. A standard but inefficient way to address this is to run each modified sample in a sandbox to verify that its functionality is preserved. This paper proposes a multi-strategy adversarial attack method that generates malware adversarial examples while preserving functionality. The method manipulates redundant or extended space in the Windows malware binary, so it does not break functionality. Experiments show that the method achieves a high attack success rate and efficiency.
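The core trick the abstract alludes to, writing bytes into file regions the Windows loader never maps or executes, can be illustrated with a short sketch. The following Python snippet is an illustration written for this page, not the authors' code; it assumes the third-party pefile package, and the helper names are hypothetical. It shows two commonly cited functionality-preserving injection points: appending to the overlay after the last section, and filling the slack padding between a section's virtual size and its file-aligned raw size.

```python
# Illustrative sketch, not the authors' implementation.
# Requires the third-party package: pip install pefile
import pefile

def append_overlay(data: bytes, payload: bytes) -> bytes:
    """Append bytes after the last section (the PE 'overlay').

    The Windows loader never maps the overlay into memory, so the
    program's runtime behavior is unchanged."""
    return data + payload

def inject_section_slack(data: bytes, payload: bytes) -> bytes:
    """Overwrite the padding between each section's virtual size and
    its file-aligned raw size; the loader ignores these bytes."""
    pe = pefile.PE(data=data)
    out = bytearray(data)
    remaining = payload
    for section in pe.sections:
        slack = section.SizeOfRawData - section.Misc_VirtualSize
        if slack <= 0 or not remaining:
            continue
        start = section.PointerToRawData + section.Misc_VirtualSize
        chunk, remaining = remaining[:slack], remaining[slack:]
        out[start:start + len(chunk)] = chunk
    return bytes(out)
```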
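Because such edits preserve functionality by construction, a black-box attack can be driven purely by the detector's score, with no sandbox run needed to re-verify each candidate. The loop below is again only a hedged sketch of that general idea, not the paper's multi-strategy algorithm; the detector interface and the 0.5 decision threshold are assumptions.

```python
import random

def greedy_overlay_attack(binary: bytes, detector, max_queries: int = 200,
                          chunk_len: int = 1024) -> bytes:
    """Keep an appended byte chunk only if it lowers the detector's
    maliciousness score; stop once the (assumed) 0.5 threshold is crossed.

    `detector` is assumed to map raw bytes to a score in [0, 1]."""
    best = binary
    best_score = detector(best)
    for _ in range(max_queries):
        payload = bytes(random.randrange(256) for _ in range(chunk_len))
        candidate = best + payload      # overlay append: functionality-safe
        score = detector(candidate)
        if score < best_score:          # keep only helpful perturbations
            best, best_score = candidate, score
        if best_score < 0.5:            # assumed decision threshold
            break
    return best
```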