Can We Predict Consequences of Cyber Attacks?

Prerit Datta, A. Namin, Keith S. Jones
DOI: 10.1109/ICMLA55696.2022.00174
Published in: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), December 2022
Citations: 0

Abstract

Threat modeling is a process by which security designers and researchers analyze the security of a system against known threats and vulnerabilities. Security experts rely on a myriad of threat intelligence and vulnerability databases to make important day-to-day decisions. Security experts and incident responders require the right set of skills and tools to recognize attack consequences and convey them to various stakeholders. In this paper, we used natural language processing (NLP) and deep learning to analyze text descriptions of cyberattacks and predict their consequences. This can be useful for quickly analyzing new attacks discovered in the wild, helping security practitioners take requisite actions, and conveying attack consequences to stakeholders in a simple way. In this work, we predicted the multilabels (availability, access control, confidentiality, integrity, and other) corresponding to each text description in MITRE's CWE dataset. We compared the performance of various CNN and LSTM deep neural networks at predicting these labels. The results indicate that it is possible to predict multilabels using an LSTM deep neural network with multiple output layers, one per label. LSTM models performed better than the CNN models.
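The architecture the abstract describes — a shared text encoder feeding one independent sigmoid output per consequence label — can be sketched as follows. This is a minimal, untrained NumPy illustration of that multilabel design, not the authors' implementation: the vocabulary, dimensions, and token IDs are placeholders, and the label set is taken from the abstract.

```python
import numpy as np

# The five consequence labels named in the abstract.
LABELS = ["availability", "access control", "confidentiality", "integrity", "other"]

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM encoder with one independent sigmoid head per label."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, n_labels):
        self.E = rng.normal(0, 0.1, (vocab_size, embed_dim))            # token embeddings
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, embed_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        # One output head (weight vector + bias) per label, so each label
        # gets its own probability -- the multilabel setup from the paper.
        self.heads = [(rng.normal(0, 0.1, hidden_dim), 0.0) for _ in range(n_labels)]
        self.h_dim = hidden_dim

    def forward(self, token_ids):
        h = np.zeros(self.h_dim)
        c = np.zeros(self.h_dim)
        for t in token_ids:
            # Standard LSTM cell: input, forget, output gates and candidate state.
            z = self.W @ np.concatenate([self.E[t], h]) + self.b
            i, f, o, g = np.split(z, 4)
            i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
            c = f * c + i * g
            h = o * np.tanh(c)
        # Each head emits an independent probability in (0, 1).
        return np.array([sigmoid(w @ h + b) for w, b in self.heads])

model = TinyLSTM(vocab_size=50, embed_dim=8, hidden_dim=16, n_labels=len(LABELS))
probs = model.forward([3, 17, 42, 9])   # toy token IDs standing in for a CWE description
print({lab: round(float(p), 3) for lab, p in zip(LABELS, probs)})
```

Because each head applies its own sigmoid rather than a shared softmax, several labels can be active at once, which matches the multilabel framing: a single attack description may, for example, impact both confidentiality and integrity.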