Explainable AI-Based Malicious Traffic Detection and Monitoring System in Next-Gen IoT Healthcare
Ece Gürbüz, Özlem Turgut, Ibrahim Kök
2023 International Conference on Smart Applications, Communications and Networking (SmartNets), published 2023-07-25
DOI: 10.1109/SmartNets58706.2023.10215896
In recent years, IoT healthcare applications have surged, ranging from wearable health monitors and remote patient monitoring systems to smart medical devices, telemedicine platforms, and personalized health tracking and management tools. These applications aim to improve treatment outcomes, streamline healthcare delivery, and enable data-driven decision-making. However, because health data is sensitive and these applications play a critical role in people's lives, ensuring their security and privacy has become a paramount concern. To address this issue, we developed an explainable malicious traffic detection and monitoring system based on Machine Learning (ML) and Deep Learning (DL) models. The proposed system uses Explainable Artificial Intelligence (XAI) methods such as LIME, SHAP, ELI5, and Integrated Gradients (IG) to ensure the interpretability and explainability of the developed models. Finally, we demonstrate the high accuracy of the developed models in detecting attacks on an intensive care patient dataset. Furthermore, we ensure the transparency and interpretability of the model outcomes by presenting them through the Shapash Monitor interface, which can be easily accessed by both experts and non-experts.
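To illustrate one of the XAI methods the abstract names, the sketch below implements Integrated Gradients by hand for a toy logistic "malicious traffic" scorer. The model, its weights, and the flow features are purely hypothetical placeholders, not the paper's actual models or dataset; the point is only the IG mechanic: average the gradient along the straight-line path from a baseline input to the real input, then scale by the input-baseline difference.

```python
import numpy as np

# Hypothetical learned weights for a 3-feature logistic traffic scorer
# (features could be e.g. packet rate, flow duration, payload entropy).
w = np.array([1.5, -2.0, 0.7])
b = -0.3

def model(x):
    """Toy probability that a flow is malicious (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def integrated_gradients(x, baseline, steps=500):
    """Approximate IG: average the model gradient at points along the
    straight line from baseline to x, then scale by (x - baseline)."""
    grads = []
    for a in np.linspace(0.0, 1.0, steps):
        p = model(baseline + a * (x - baseline))
        grads.append(p * (1.0 - p) * w)  # analytic gradient of the logistic
    avg_grad = np.mean(grads, axis=0)
    return (x - baseline) * avg_grad

x = np.array([2.0, 0.5, 1.0])   # one observed flow (illustrative values)
baseline = np.zeros(3)          # all-zero "reference" flow as the baseline
attr = integrated_gradients(x, baseline)

# IG's completeness axiom: attributions sum (approximately) to
# model(x) - model(baseline), so each feature's share of the score is explicit.
print(attr, attr.sum(), model(x) - model(baseline))
```

In practice one would use a library implementation (e.g. Captum's `IntegratedGradients` for DL models, or SHAP/LIME for model-agnostic explanations, as the paper does) rather than hand-rolling the path integral; the manual version just makes the attribution arithmetic visible.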