HEN: a novel hybrid explainable neural network based framework for robust network intrusion detection

Impact Factor 7.3 · CAS Region 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS
Wei Wei, Sijin Chen, Cen Chen, Heshi Wang, Jing Liu, Zhongyao Cheng, Xiaofeng Zou
DOI: 10.1007/s11432-023-4067-x
Journal: Science China Information Sciences
Publication date: 2024-06-28
Publication type: Journal Article
Citations: 0

Abstract


With the rapid development of network technology and the automation process for 5G, cyber-attacks have become increasingly complex and threatening. In response to these threats, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the incessant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of our model and the performance in detecting new attacks through the innovative application of the explainable artificial intelligence (XAI) method. We effectively introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long-term short-term memory (AE-LSTM) network to reconstruct SHAP values previously generated. Furthermore, we define a threshold based on reconstruction errors observed during the training phase. Any network flow that surpasses the specified threshold is classified as an attack flow. This approach enhances the framework’s ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the dataset NSL-KDD. Experimental results demonstrate that our approach generates detection performance on par with state-of-the-art methods.
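The thresholding step described in the abstract (flagging any flow whose reconstruction error surpasses a bound learned during training) can be sketched in a few lines. Note this is a minimal illustrative sketch: the 95th-percentile rule and the toy error values are assumptions, not the paper's exact procedure, and in the actual framework the errors would come from the AE-LSTM reconstructing SHAP values rather than from synthetic data.

```python
import numpy as np

def fit_threshold(train_errors, percentile=95.0):
    # Threshold derived from reconstruction errors observed during training.
    # The 95th-percentile rule is an assumption for illustration; the paper
    # only states that the threshold is based on training-phase errors.
    return np.percentile(train_errors, percentile)

def classify_flows(test_errors, threshold):
    # Any flow whose reconstruction error surpasses the threshold is
    # classified as an attack flow (True); the rest are treated as benign.
    return test_errors > threshold

# Toy data standing in for AE-LSTM reconstruction errors on benign flows.
rng = np.random.default_rng(0)
normal_errors = rng.normal(loc=0.05, scale=0.01, size=1000)
threshold = fit_threshold(normal_errors)

# A well-reconstructed flow vs. one the autoencoder fails to reconstruct.
test_errors = np.array([0.03, 0.40])
print(classify_flows(test_errors, threshold))
```

The design intuition is that an autoencoder trained only on normal traffic reconstructs benign flows well but fails on unseen attack patterns, so a large reconstruction error signals an anomaly even for attack types absent from the training data.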

Source journal: Science China Information Sciences
CiteScore: 12.60
Self-citation rate: 5.70%
Articles per year: 224
Review time: 8.3 months
Journal description: Science China Information Sciences is a dedicated journal that showcases high-quality, original research across various domains of information sciences. It encompasses Computer Science & Technologies, Control Science & Engineering, Information & Communication Engineering, Microelectronics & Solid-State Electronics, and Quantum Information, providing a platform for the dissemination of significant contributions in these fields.