No Need to Teach New Tricks to Old Malware: Winning an Evasion Challenge with XOR-based Adversarial Samples

Fabrício Ceschin, Marcus Botacin, Gabriel Lüders, Heitor Murilo Gomes, Luiz Oliveira, A. Grégio
{"title":"No Need to Teach New Tricks to Old Malware: Winning an Evasion Challenge with XOR-based Adversarial Samples","authors":"Fabrício Ceschin, Marcus Botacin, Gabriel Lüders, Heitor Murilo Gomes, Luiz Oliveira, A. Grégio","doi":"10.1145/3433667.3433669","DOIUrl":null,"url":null,"abstract":"Adversarial attacks to Machine Learning (ML) models became such a concern that tech companies (Microsoft and CUJO AI’s Vulnerability Research Lab) decided to launch contests to better understand their impact on practice. During the contest’s first edition (2019), participating teams were challenged to bypass three ML models in a white box manner. Our team bypassed all the three of them and reported interesting insights about models’ weaknesses. In the second edition (2020), the challenge evolved to an attack-and-defense model: the teams should either propose defensive models and attack other teams’ models in a black box manner. Despite the difficulty increase, our team was able to bypass all models again. In this paper, we describe our insights for this year’s contest regarding on attacking models, as well defending them from adversarial attacks. In particular, we show how frequency-based models (e.g., TF-IDF) are vulnerable to the addition of dead function imports, and how models based on raw bytes are vulnerable to payload-embedding obfuscation (e.g., XOR and base64 encoding).","PeriodicalId":379610,"journal":{"name":"Reversing and Offensive-Oriented Trends Symposium","volume":"750 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Reversing and Offensive-Oriented Trends Symposium","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3433667.3433669","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Adversarial attacks against Machine Learning (ML) models have become such a concern that tech companies (Microsoft and CUJO AI’s Vulnerability Research Lab) decided to launch contests to better understand their impact in practice. In the contest’s first edition (2019), participating teams were challenged to bypass three ML models in a white-box manner. Our team bypassed all three of them and reported interesting insights about the models’ weaknesses. In the second edition (2020), the challenge evolved into an attack-and-defense model: teams had to both propose defensive models and attack the other teams’ models in a black-box manner. Despite the increased difficulty, our team was able to bypass all models again. In this paper, we describe our insights from this year’s contest regarding attacking models as well as defending them from adversarial attacks. In particular, we show how frequency-based models (e.g., TF-IDF) are vulnerable to the addition of dead function imports, and how models based on raw bytes are vulnerable to payload-embedding obfuscation (e.g., XOR and base64 encoding).
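Two short sketches may help make these attack vectors concrete. First, a minimal illustration of why frequency-based features are sensitive to dead imports, assuming a scikit-learn TfidfVectorizer over space-joined import names; the import names and the repetition factor below are hypothetical placeholders, not taken from the contest models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical import lists extracted from a PE file's import table.
malicious = "CreateRemoteThread WriteProcessMemory VirtualAllocEx"

# "Dead" imports: functions the binary declares but never calls. Flooding
# the import table with benign-looking names dilutes the TF-IDF weight of
# the suspicious ones without changing the program's behavior.
dead = " ".join(["GetWindowTextW", "DrawTextW", "PlaySoundW"] * 10)
padded = malicious + " " + dead

vectorizer = TfidfVectorizer(lowercase=False)
X = vectorizer.fit_transform([malicious, padded])

idx = vectorizer.vocabulary_["CreateRemoteThread"]
# The suspicious import's weight drops once benign imports flood the vector.
print(X[0, idx], X[1, idx])
```

Second, a minimal sketch of the XOR-plus-base64 payload embedding that hides byte patterns from raw-byte models. The payload bytes and the single-byte key are hypothetical, and the snippet only shows the encoding round trip, not the authors' actual dropper tooling (a real sample would ship the encoded blob and decode it at runtime):

```python
import base64

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key; the operation is its own inverse."""
    return bytes(b ^ key for b in data)

payload = b"MZ\x90\x00\x03\x00\x00\x00"  # hypothetical first bytes of a PE payload
key = 0x42                               # hypothetical single-byte key

# Encoding the payload hides the byte patterns a raw-byte model was trained on.
blob = base64.b64encode(xor_bytes(payload, key))

# At runtime, a loader reverses both steps to recover the original bytes.
recovered = xor_bytes(base64.b64decode(blob), key)
assert recovered == payload
```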