{"title":"Explainable AI-Based Intrusion Detection Systems for Cloud and IoT","authors":"M. Gaitan-Cardenas, Mahmoud Abdelsalam, K. Roy","doi":"10.1109/ICCCN58024.2023.10230177","DOIUrl":null,"url":null,"abstract":"Recently, machine learning (ML) has been used extensively for intrusion detection systems (IDS), which proved to be very effective in various environments such as the Cloud and IoT. To achieve higher detection rates, ML models that are used for intrusion detection became very sophisticated. This complexity can be seen for both traditional ML models as well as deep learning models. However, due to their complexity, the decisions that are made by such ML-based IDS are very hard to analyze, understand and interpret. In turn, even though, ML-based IDS are very effective, they are becoming less transparent. As such, many of the proposed intrusion detection methods have not been deployed in real world applications because of the lack of explanation and trustworthiness. In this paper, we provide explanation and analysis for ML-based IDS using the SHapley additive exPlanations (SHAP) explainability technique. We applied SHAP to various ML models such as Decision Trees (DT), Random Forest (RF), Logistic Regression (LR), and Feed Forward Neural Networks (FFNN). Further, we conducted our analysis based on NetFlow data collected from the Cloud, utilizing CIDDS-001 and CIDDS-002 datasets, and IoT, utilizing NF-ToN-IoT-v2 dataset.","PeriodicalId":132030,"journal":{"name":"2023 32nd International Conference on Computer Communications and Networks (ICCCN)","volume":"341 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 32nd International Conference on Computer Communications and Networks (ICCCN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCN58024.2023.10230177","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Machine learning (ML) has recently been used extensively in intrusion detection systems (IDS) and has proved very effective in environments such as the Cloud and IoT. To achieve higher detection rates, the ML models used for intrusion detection have become increasingly sophisticated, a trend seen in both traditional ML models and deep learning models. However, this complexity makes the decisions of such ML-based IDS very hard to analyze, understand, and interpret. Thus, although ML-based IDS are very effective, they are becoming less transparent, and many proposed intrusion detection methods have not been deployed in real-world applications because they lack explainability and trustworthiness. In this paper, we provide explanation and analysis for ML-based IDS using the SHapley Additive exPlanations (SHAP) explainability technique. We applied SHAP to various ML models such as Decision Trees (DT), Random Forest (RF), Logistic Regression (LR), and Feed-Forward Neural Networks (FFNN). Further, we conducted our analysis on NetFlow data collected from the Cloud, using the CIDDS-001 and CIDDS-002 datasets, and from IoT, using the NF-ToN-IoT-v2 dataset.
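For readers who want to reproduce this kind of analysis, the sketch below shows the general SHAP workflow on one of the model families the paper studies (a Random Forest). It is a minimal illustration, not the paper's code: the file name `netflow_features.csv`, its feature columns, and the binary `label` encoding are assumptions standing in for the CIDDS/NF-ToN-IoT NetFlow exports.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical NetFlow feature table with a binary "label" column
# (0 = benign, 1 = attack); file name and columns are illustrative.
df = pd.read_csv("netflow_features.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# One of the model families named in the paper; hyperparameters are defaults.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# shap's return shape varies by version for classifiers; keep the
# attack-class (label 1) explanations either way.
if isinstance(shap_values, list):        # older shap: one array per class
    shap_values = shap_values[1]
elif np.ndim(shap_values) == 3:          # newer shap: (rows, features, classes)
    shap_values = shap_values[:, :, 1]

# Beeswarm summary: which flow features drive the attack predictions.
shap.summary_plot(shap_values, X_test)
```

The same pattern extends to the other models: TreeExplainer also covers Decision Trees, while Logistic Regression and FFNNs would use a model-agnostic explainer such as shap's KernelExplainer instead.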