Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security: Latest Publications

Automatic Yara Rule Generation Using Biclustering
Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security. Pub Date: 2020-09-06. DOI: 10.1145/3411508.3421372
Edward Raff, Richard Zak, Gary Lopez Munoz, William Fleming, H. Anderson, Bobby Filar, Charles K. Nicholas, James Holt
{"title":"Automatic Yara Rule Generation Using Biclustering","authors":"Edward Raff, Richard Zak, Gary Lopez Munoz, William Fleming, H. Anderson, Bobby Filar, Charles K. Nicholas, James Holt","doi":"10.1145/3411508.3421372","DOIUrl":"https://doi.org/10.1145/3411508.3421372","url":null,"abstract":"Yara rules are a ubiquitous tool among cybersecurity practitioners and analysts. Developing high-quality Yara rules to detect a malware family of interest can be labor- and time-intensive, even for expert users. Few tools exist and relatively little work has been done on how to automate the generation of Yara rules for specific families. In this paper, we leverage large n-grams (n ≥ 8) combined with a new biclustering algorithm to construct simple Yara rules more effectively than currently available software. Our method, AutoYara, is fast, allowing for deployment on low-resource equipment for teams that deploy to remote networks. Our results demonstrate that AutoYara can help reduce analyst workload by producing rules with useful true-positive rates while maintaining low false-positive rates, sometimes matching or even outperforming human analysts.In addition, real-world testing by malware analysts indicates AutoYara could reduce analyst time spent constructing Yara rules by 44-86%, allowing them to spend their time on the more advanced malware that current tools can't handle. Code will be made available at https://github.com/NeuromorphicComputationResearchProgram.","PeriodicalId":132987,"journal":{"name":"Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129791508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
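The pipeline the abstract outlines (extract large byte n-grams from a family's samples, bicluster the sample-by-feature presence matrix, and turn a bicluster of co-occurring n-grams into a rule) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: scikit-learn's SpectralCoclustering stands in for the paper's new biclustering algorithm, and the vocabulary size, cluster count, and match threshold are assumed values.

```python
from collections import Counter
from pathlib import Path

import numpy as np
from sklearn.cluster import SpectralCoclustering

N = 8  # the paper uses large n-grams, n >= 8

def byte_ngrams(data: bytes, n: int = N) -> set:
    """All distinct n-byte substrings of a binary."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def emit_rule(files, name="auto_family", clusters=4, top_k=512):
    """Bicluster a sample-by-ngram presence matrix and turn one bicluster
    of co-occurring n-grams into a Yara rule."""
    per_file = [byte_ngrams(Path(f).read_bytes()) for f in files]
    counts = Counter(g for grams in per_file for g in grams)
    vocab = [g for g, _ in counts.most_common(top_k)]
    X = np.array([[g in grams for g in vocab] for grams in per_file], dtype=float)
    # Stand-in biclustering; the paper introduces its own algorithm.
    model = SpectralCoclustering(n_clusters=clusters, random_state=0).fit(X)
    grams = [vocab[j] for j in np.where(model.column_labels_ == 0)[0]]
    strings = "\n".join(
        "    $s%d = { %s }" % (i, g.hex(" ")) for i, g in enumerate(grams)
    )
    threshold = max(1, len(grams) // 2)
    return "rule %s {\n  strings:\n%s\n  condition:\n    %d of them\n}" % (
        name, strings, threshold
    )
```

The `max(1, len(grams) // 2)` condition mirrors the "N of them" thresholds common in hand-written Yara rules, trading a little true-positive rate for a lower false-positive rate.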
Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic
Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security. Pub Date: 2020-09-02. DOI: 10.1145/3411508.3421379
Carlos Novo, Ricardo Morla
{"title":"Flow-based Detection and Proxy-based Evasion of Encrypted Malware C2 Traffic","authors":"Carlos Novo, Ricardo Morla","doi":"10.1145/3411508.3421379","DOIUrl":"https://doi.org/10.1145/3411508.3421379","url":null,"abstract":"State of the art deep learning techniques are known to be vulnerable to evasion attacks where an adversarial sample is generated from a malign sample and misclassified as benign. Detection of encrypted malware command and control traffic based on TCP/IP flow features can be framed as a learning task and is thus vulnerable to evasion attacks. However, unlike e.g. in image processing where generated adversarial samples can be directly mapped to images, going from flow features to actual TCP/IP packets requires crafting the sequence of packets, with no established approach for such crafting and a limitation on the set of modifiable features that such crafting allows.In this paper we discuss learning and evasion consequences of the gap between generated and crafted adversarial samples. We exemplify with a deep neural network detector trained on a public C2 traffic dataset, white-box adversarial learning, and a proxy-based approach for crafting longer flows. Our results show 1) the high evasion rate obtained by using generated adversarial samples on the detector can be significantly reduced when using crafted adversarial samples; 2) robustness against adversarial samples by model hardening varies according to the crafting approach and corresponding set of modifiable features that the attack allows for; 3) incrementally training hardened models with adversarial samples can produce a level playing field where no detector is best against all attacks and no attack is best against all detectors, in a given set of attacks and detectors. To the best of our knowledge this is the first time that level playing field feature set- and iteration-hardening are analyzed in encrypted C2 malware traffic detection.","PeriodicalId":132987,"journal":{"name":"Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126192745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
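To make the gap between generated and crafted adversarial samples concrete, here is a minimal sketch assuming a PyTorch flow-feature detector that outputs a probability of maliciousness. FGSM as the white-box attack, the four-feature layout, and the constraint that a proxy can only increase the modifiable features (e.g., by adding padding or delay) are all illustrative assumptions; the paper's crafting operates on real packet sequences rather than feature vectors.

```python
import torch

# Which flow features a crafter can actually change; the names and the
# "only increase" constraint are illustrative assumptions.
MODIFIABLE = torch.tensor([1.0, 1.0, 0.0, 0.0])  # e.g. duration, total bytes

def generated_adversarial(model, x, y, eps=0.1):
    """Unconstrained white-box FGSM sample: it may correspond to no
    realizable TCP/IP packet sequence at all."""
    x_adv = x.clone().requires_grad_(True)
    loss = torch.nn.functional.binary_cross_entropy(model(x_adv), y)
    loss.backward()
    return (x + eps * x_adv.grad.sign()).detach()

def crafted_adversarial(model, x, y, eps=0.1):
    """Keep only the part of the perturbation a proxy could realize:
    modifiable features, moved upward only (padding and delay can add
    to a flow, never subtract from it)."""
    delta = generated_adversarial(model, x, y, eps) - x
    delta = delta.clamp(min=0.0) * MODIFIABLE
    return x + delta
```

The drop in evasion rate between `generated_adversarial` and `crafted_adversarial` is exactly the effect the paper's first result describes.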
The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search
Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security. Pub Date: 2020-08-24. DOI: 10.1145/3411508.3421374
S. Verwer, A. Nadeem, Christian A. Hammerschmidt, Laurens Bliek, Abdullah Al-Dujaili, Una-May O’Reilly
{"title":"The Robust Malware Detection Challenge and Greedy Random Accelerated Multi-Bit Search","authors":"S. Verwer, A. Nadeem, Christian A. Hammerschmidt, Laurens Bliek, Abdullah Al-Dujaili, Una-May O’Reilly","doi":"10.1145/3411508.3421374","DOIUrl":"https://doi.org/10.1145/3411508.3421374","url":null,"abstract":"Training classifiers that are robust against adversarially modified examples is becoming increasingly important in practice. In the field of malware detection, adversaries modify malicious binary files to seem benign while preserving their malicious behavior. We report on the results of a recently held robust malware detection challenge. There were two tracks in which teams could participate: the attack track asked for adversarially modified malware samples and the defend track asked for trained neural network classifiers that are robust to such modifications. The teams were unaware of the attacks/defenses they had to detect/evade. Although only 9 teams participated, this unique setting allowed us to make several interesting observations. We also present the challenge winner: GRAMS, a family of novel techniques to train adversarially robust networks that preserve the intended (malicious) functionality and yield high-quality adversarial samples. These samples are used to iteratively train a robust classifier. We show that our techniques, based on discrete optimization techniques, beat purely gradient-based methods. GRAMS obtained first place in both the attack and defend tracks of the competition.","PeriodicalId":132987,"journal":{"name":"Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security","volume":"247 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115951242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
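The abstract names the winning method but gives no pseudocode, so the following is a simplified, hypothetical rendering of a greedy random multi-bit search in the spirit of GRAMS: flip random groups of "addable" bits (features that can be set without breaking the malware's behavior) and keep any group that lowers the detector's score. The `score` callable, the restriction to additions only, and the flip-group size are assumptions, not the competition implementation.

```python
import random

def grams_like_search(score, x, addable, k=8, iters=200, seed=0):
    """score: maps a 0/1 feature vector to P(malicious).
    addable: indices that may be flipped 0 -> 1 without breaking behavior."""
    rng = random.Random(seed)
    best, best_score = list(x), score(x)
    for _ in range(iters):
        cand = list(best)
        for i in rng.sample(addable, min(k, len(addable))):
            cand[i] = 1  # only add features, so functionality is preserved
        s = score(cand)
        if s < best_score:  # greedy: keep multi-bit flips that help
            best, best_score = cand, s
    return best, best_score
```

Because the search only queries `score`, it needs no gradients, which suggests one reason discrete methods like this can beat purely gradient-based attacks on binary feature spaces.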
Towards Certifiable Adversarial Sample Detection
Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security. Pub Date: 2020-02-20. DOI: 10.1145/3411508.3421381
Ilia Shumailov, Yiren Zhao, R. Mullins, Ross Anderson
{"title":"Towards Certifiable Adversarial Sample Detection","authors":"Ilia Shumailov, Yiren Zhao, R. Mullins, Ross Anderson","doi":"10.1145/3411508.3421381","DOIUrl":"https://doi.org/10.1145/3411508.3421381","url":null,"abstract":"Convolutional Neural Networks (CNNs) are deployed in more and more classification systems, but adversarial samples can be maliciously crafted to trick them, and are becoming a real threat. There have been various proposals to improve CNNs' adversarial robustness but these all suffer performance penalties or have other limitations. In this paper, we offer a new approach in the form of a certifiable adversarial detection scheme, the Certifiable Taboo Trap (CTT). This system, in theory, can provide certifiable guarantees of detectability of a range of adversarial inputs for certain l-∞ sizes. We develop and evaluate several versions of CTT with different defense capabilities, training overheads and certifiability on adversarial samples. In practice, against adversaries with various l-p norms, CTT outperforms existing defense methods that focus purely on improving network robustness. We show that CTT has small false positive rates on clean test data, minimal compute overheads when deployed, and can support complex security policies.","PeriodicalId":132987,"journal":{"name":"Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125830021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
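One way to picture the taboo-trap idea behind CTT is a detector that records per-neuron activation ceilings on clean data and flags any input that trips them. The sketch below is a loose, hypothetical simplification: the actual CTT trains the network against such bounds and derives certifiable l-∞ detectability guarantees, neither of which this code attempts.

```python
import torch

class TabooDetector:
    """Flag inputs whose hidden activations exceed ceilings fitted on
    clean data. `feature_extractor` maps a batch to [batch, d] activations."""

    def __init__(self, feature_extractor):
        self.f = feature_extractor
        self.ceiling = None

    def calibrate(self, clean_loader, slack=1.05):
        # Max clean activation per neuron, with a small safety margin.
        with torch.no_grad():
            maxes = [self.f(x).amax(dim=0) for x, _ in clean_loader]
        self.ceiling = slack * torch.stack(maxes).amax(dim=0)

    def is_adversarial(self, x):
        # Any activation above its ceiling lands in the taboo region.
        with torch.no_grad():
            return bool((self.f(x) > self.ceiling).any())
```

On clean test data few activations should exceed the fitted ceilings, which corresponds to the small false-positive rates the abstract reports.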