Machine Learning Explainability for Intrusion Detection in the Industrial Internet of Things

Love Allen Chijioke Ahakonye, C. I. Nwakanma, Jae Min Lee, Dong‐Seong Kim
{"title":"Machine Learning Explainability for Intrusion Detection in the Industrial Internet of Things","authors":"Love Allen Chijioke Ahakonye, C. I. Nwakanma, Jae Min Lee, Dong‐Seong Kim","doi":"10.1109/IOTM.001.2300171","DOIUrl":null,"url":null,"abstract":"Intrusion and attacks have consistently challenged the Industrial Internet of Things (IIoT). Although artificial intelligence (AI) rapidly develops in attack detection and mitigation, building convincing trust is difficult due to its black-box nature. Its unexplained outcome inhibits informed and adequate decision-making of the experts and stakeholders. Explainable AI (XAI) has emerged to help with this problem. However, the ease of comprehensibility of XAI interpretation remains an issue due to the complexity and reliance on statistical theories. This study integrates agnostic post-hoc LIME and SHAP explainability approaches on intrusion detection systems built using representative AI models to explain the model's decisions and provide more insights into interpretability. The decision and confidence impact ratios assessed the significance of features and model dependencies, enhancing cybersecurity experts' trust and informed decisions. The experimental findings highlight the importance of these explainability techniques for understanding and mitigating IIoT intrusions with recourse to significant data features and model decisions.","PeriodicalId":235472,"journal":{"name":"IEEE Internet of Things Magazine","volume":"46 10","pages":"68-74"},"PeriodicalIF":0.0000,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Magazine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IOTM.001.2300171","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Intrusions and attacks have consistently challenged the Industrial Internet of Things (IIoT). Although artificial intelligence (AI) is advancing rapidly in attack detection and mitigation, its black-box nature makes it difficult to build convincing trust: unexplained outcomes inhibit informed and adequate decision-making by experts and stakeholders. Explainable AI (XAI) has emerged to address this problem, but the comprehensibility of XAI interpretations remains an issue because of their complexity and reliance on statistical theory. This study integrates the model-agnostic post-hoc LIME and SHAP explainability approaches into intrusion detection systems built with representative AI models to explain the models' decisions and provide deeper insight into interpretability. Decision and confidence impact ratios assess the significance of features and model dependencies, enhancing cybersecurity experts' trust and supporting informed decisions. The experimental findings highlight the importance of these explainability techniques for understanding and mitigating IIoT intrusions by identifying significant data features and model decisions.
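To illustrate the kind of post-hoc, model-agnostic analysis the abstract describes, the following is a minimal sketch of applying SHAP (global feature importance) and LIME (local, per-instance explanation) to a stand-in intrusion detector. The synthetic data, feature names, and random-forest model are placeholders introduced for illustration only; the paper's actual models, IIoT dataset, and decision/confidence impact ratio computations are not reproduced here.

```python
# Minimal sketch: post-hoc SHAP and LIME explanations for a stand-in IIoT
# intrusion detector. The synthetic data, feature names, and random-forest
# model are placeholders, not the paper's actual setup.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for labelled IIoT traffic (benign vs. attack).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
class_names = ["benign", "attack"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Representative black-box detector.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# --- SHAP: global view of which features drive the model's decisions ---
explainer = shap.TreeExplainer(model)
sv = np.abs(np.asarray(explainer.shap_values(X_test)))
# The array layout differs across shap versions; average over every axis
# except the one whose length equals the number of features.
feat_axis = list(sv.shape).index(X_test.shape[1])
importance = sv.mean(axis=tuple(ax for ax in range(sv.ndim) if ax != feat_axis))
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {score:.4f}")

# --- LIME: local explanation of a single flagged flow ---
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=class_names, mode="classification")
exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())
```

The SHAP summary gives a dataset-wide ranking of the features an analyst should inspect first, while the LIME output explains an individual alert, which is the level at which a security operator typically needs to justify a response.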