Federated Learning of Explainable AI (FedXAI) for deep learning-based intrusion detection in IoT networks

IF 4.6 · CAS Zone 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Rajesh Kalakoti, Sven Nõmm, Hayretdin Bahsi
Journal: Computer Networks, Volume 270, Article 111479
DOI: 10.1016/j.comnet.2025.111479
Publication date: 2025-06-25 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1389128625004463
Citations: 0

Abstract

The rapid growth of Internet of Things (IoT) devices has increased their vulnerability to botnet attacks, posing serious network security challenges. While deep learning models within federated learning (FL) can detect such threats while preserving privacy, their black-box nature limits interpretability, which is crucial for trust in security systems. Integrating explainable AI (XAI) into FL is significantly challenging, as many XAI methods require access to client data to interpret the behaviour of the global model on the server side. In this study, we propose a Federated Learning of Explainable AI (FedXAI) framework for binary and multiclass classification (botnet type and attack type) to perform intrusion detection in IoT devices. We incorporate one of the most widely known XAI methods, SHAP (SHapley Additive exPlanations), into the detection framework. Specifically, we propose a privacy-preserving method in which the server securely aggregates SHAP value-based explanations from local models on the client side to approximate explanations for the global model on the server, without accessing any client data. Our evaluation demonstrates that the securely aggregated client-side explanations closely approximate the global-model explanations generated when the server has access to client data. Our FL framework utilises a long short-term memory (LSTM) network in a horizontal FL setup with the FedAvg (federated averaging) aggregation algorithm, achieving high detection performance for botnet detection in all binary and multiclass classification tasks. Additionally, we evaluated post-hoc explanations for local models on the client side using LIME (Local Interpretable Model-Agnostic Explanations), Integrated Gradients (IG), and SHAP, with SHAP performing better on metrics such as Faithfulness, Complexity, Monotonicity, and Robustness.
This study demonstrates that it is possible to achieve a high-performing FL model that addresses both explainability and privacy in the same framework for intrusion detection in IoT networks.
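The aggregation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each client locally computes a mean |SHAP| attribution per feature on its own data, and the server combines these vectors FedAvg-style (weighted by client sample counts) without ever seeing raw client data. All names and the two-client example are hypothetical.

```python
def fedavg(vectors, counts):
    """Weighted average of per-client vectors, FedAvg-style:
    each client's contribution is proportional to its sample count."""
    total = sum(counts)
    dim = len(vectors[0])
    return [sum(v[i] * n for v, n in zip(vectors, counts)) / total
            for i in range(dim)]

def aggregate_shap(client_shap, counts):
    """Server-side aggregation of client-computed mean |SHAP| values
    per feature, approximating global-model explanations without
    the server accessing any client data."""
    return fedavg(client_shap, counts)

# Hypothetical example: two clients, three features.
client_shap = [[0.6, 0.1, 0.3],   # client A, 100 local samples
               [0.2, 0.5, 0.3]]   # client B, 300 local samples
counts = [100, 300]
global_expl = aggregate_shap(client_shap, counts)
# Sample-weighted mean per feature, e.g. (0.6*100 + 0.2*300) / 400 = 0.3
```

In the paper's setting the clients would additionally use secure aggregation so the server only sees the combined explanation, not any individual client's SHAP vector.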
Source journal
Computer Networks (Engineering & Technology: Telecommunications)
CiteScore: 10.80
Self-citation rate: 3.60%
Articles published: 434
Review time: 8.6 months
Journal description: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.