Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey

Huda Ali Alatwi, A. Aldweesh
DOI: 10.1109/AIIoT52608.2021.9454214
Published in: 2021 IEEE World AI IoT Congress (AIIoT), 2021-05-10
Citations: 7

Abstract

Due to their success across many domains, deep learning techniques are increasingly used to build network intrusion detection systems (NIDSs) that detect and mitigate both known and unknown attacks with high detection accuracy and minimal feature engineering. However, deep learning models have been found vulnerable to crafted data instances, so-called adversarial examples, that mislead them into incorrect classification decisions. This vulnerability allows attackers to target NIDSs in a black-box setting: by adding small, carefully crafted perturbations to malicious traffic, they can evade detection and disrupt the system's critical functionality. Yet little research has addressed the risks of black-box adversarial attacks against NIDSs or proposed mitigation solutions. This survey examines the problem and identifies open issues and areas where further research could have considerable impact.
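The black-box evasion the abstract describes can be illustrated with a minimal sketch: the attacker only queries the detector's score and keeps small feature perturbations that lower it. Everything below is a hypothetical stand-in (the toy classifier, its weights, the flow features, and the random-search loop), not any specific attack from the surveyed literature:

```python
import math
import random

def nids_score(x):
    """Toy stand-in for a deployed NIDS classifier (hypothetical weights).
    The attacker can only query it; a score >= 0.5 means the traffic is flagged."""
    weights = [0.9, -0.2, 0.7, 0.4]
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-s))

def black_box_evade(x, step=0.05, budget=200, seed=0):
    """Query-only random search: try small per-feature perturbations and keep
    only those that lower the malicious score (no gradients, no model access)."""
    rng = random.Random(seed)
    x = list(x)
    best = nids_score(x)
    for _ in range(budget):
        i = rng.randrange(len(x))
        delta = rng.choice([-step, step])
        x[i] += delta
        score = nids_score(x)
        if score < best:
            best = score      # perturbation helps evasion: keep it
        else:
            x[i] -= delta     # revert and try another direction
    return x, best

malicious = [1.0, 0.0, 1.0, 0.5]          # hypothetical flow features, initially flagged
adv, adv_score = black_box_evade(malicious)
```

The key property, mirrored by the attacks this survey covers, is that the loop never inspects the model's internals: detection is degraded purely through input-output queries, which is what makes the threat realistic against deployed NIDSs.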