The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search

S. Verwer, A. Nadeem, Christian A. Hammerschmidt, Laurens Bliek, Abdullah Al-Dujaili, Una-May O’Reilly
DOI: 10.1145/3411508.3421374
Published in: Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security, 2020-08-24
Citations: 6

Abstract

Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious behavior. We report on the results of a recently held robust malware detection challenge. There were two tracks in which teams could participate: the attack track asked for adversarially modified malware samples and the defend track asked for trained neural network classifiers that are robust to such modifications. The teams were unaware of the attacks/defenses they had to detect/evade. Although only 9 teams participated, this unique setting allowed us to make several interesting observations. We also present the challenge winner: GRAMS, a family of novel techniques to train adversarially robust networks that preserve the intended (malicious) functionality and yield high-quality adversarial samples. These samples are used to iteratively train a robust classifier. We show that our techniques, based on discrete optimization techniques, beat purely gradient-based methods. GRAMS obtained first place in both the attack and defend tracks of the competition.
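The abstract does not spell out the GRAMS algorithm itself, but the core idea it names, a greedy random search over multi-bit modifications that keeps only changes lowering the classifier's malware score, can be sketched as follows. This is a minimal illustration of that general discrete-optimization loop, not the paper's exact method; the function names, the linear toy `score`, and the `flippable` index set (bits assumed changeable without breaking the binary's functionality) are all illustrative assumptions.

```python
import random

def greedy_random_multibit_search(score, x, flippable, n_iters=200,
                                  bits_per_step=2, seed=0):
    """Greedy random multi-bit search (illustrative sketch, not the paper's
    exact GRAMS algorithm): repeatedly flip a few randomly chosen feature
    bits and keep the flips only if the malware score decreases."""
    rng = random.Random(seed)
    x = list(x)
    best = score(x)
    for _ in range(n_iters):
        # Propose flipping a small random subset of functionality-preserving bits.
        idxs = rng.sample(flippable, k=min(bits_per_step, len(flippable)))
        for i in idxs:
            x[i] ^= 1
        s = score(x)
        if s < best:
            best = s            # keep the flips: the sample looks more benign
        else:
            for i in idxs:      # revert the proposal
                x[i] ^= 1
    return x, best

# Toy linear classifier: positive weights mark "malicious" indicator bits.
weights = [2.0, -1.0, 3.0, 0.5, 1.0]
score = lambda x: sum(w * b for w, b in zip(weights, x))
x0 = [1, 1, 1, 1, 1]
x_adv, s = greedy_random_multibit_search(score, x0, flippable=[0, 2, 4])
```

In an adversarial-training loop of the kind the abstract describes, samples like `x_adv` would be fed back as additional training data for the next, more robust, classifier iteration.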