{"title":"SHAP-Driven Feature Analysis Approach for Epileptic Seizure Prediction.","authors":"Mohsin Hasan, Wenjuan Wu, Xufeng Zhao","doi":"10.1007/s10916-025-02211-1","DOIUrl":null,"url":null,"abstract":"<p><p>Predicting epileptic seizures presents a substantial difficulty in healthcare, with considerable implications for enhancing patient outcomes and quality of life. This paper presents an explainable artificial intelligence (AI) that integrates a one-dimensional convolutional neural network (1D-CNN) with SHapley Additive exPlanations (SHAP). The approach facilitates precise and interpretable seizure prediction utilising electroencephalography (EEG) inputs. The suggested 1D-CNN model with SHAP attains superior performance, exhibiting an accuracy of 98.14% and an F1-score of 98.30% with feature-level explainability and high clinical insight using the CHB-MIT dataset. Through the computation and aggregation of SHAP values across time, we identified the most significant EEG channels, specifically \"P7-O1\" and \"P3-O1\", as essential for seizure detection. This transparency is crucial for building practitioners' trust and acceptance of the use of artificial intelligence-based solutions in the clinical domain. The technique can readily operate within portable EEG structures and hospital monitoring systems, triggering real-time alerts to patients. The outcome provides a timely intervention that could include anything from medication adjustments to responses in emergencies, preventing potential injury and improving safety. So, SHAP not only explains the model's predictions, but it also check and improve how much it relies on certain features, which makes it more reliable. Additionally, SHAP's interpretability aids physicians in understanding why the model arrived at its conclusions, increasing trust in the predictions and encouraging its extensive utilisation in diagnostic processes.</p>","PeriodicalId":16338,"journal":{"name":"Journal of Medical Systems","volume":"49 1","pages":"77"},"PeriodicalIF":5.7000,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Systems","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s10916-025-02211-1","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0
Abstract
Predicting epileptic seizures presents a substantial difficulty in healthcare, with considerable implications for enhancing patient outcomes and quality of life. This paper presents an explainable artificial intelligence (AI) approach that integrates a one-dimensional convolutional neural network (1D-CNN) with SHapley Additive exPlanations (SHAP). The approach facilitates precise and interpretable seizure prediction from electroencephalography (EEG) inputs. The proposed 1D-CNN model with SHAP attains superior performance on the CHB-MIT dataset, exhibiting an accuracy of 98.14% and an F1-score of 98.30% while providing feature-level explainability and clinical insight. By computing and aggregating SHAP values across time, we identified the most significant EEG channels, specifically "P7-O1" and "P3-O1", as essential for seizure detection. This transparency is crucial for building practitioners' trust in, and acceptance of, artificial intelligence-based solutions in the clinical domain. The technique can readily operate within portable EEG devices and hospital monitoring systems, delivering real-time alerts to patients. Such alerts enable timely intervention, ranging from medication adjustments to emergency responses, preventing potential injury and improving safety. SHAP therefore not only explains the model's predictions but also helps check and refine how heavily the model relies on particular features, making it more reliable. Additionally, SHAP's interpretability aids physicians in understanding why the model arrived at its conclusions, increasing trust in the predictions and encouraging its extensive utilisation in diagnostic processes.
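To make the channel-ranking step concrete, the sketch below illustrates one plausible way to aggregate SHAP values over time for a Keras 1D-CNN using shap.GradientExplainer. The architecture, window length, channel count, channel names, and use of synthetic (untrained) data are assumptions made for illustration only; they are not the configuration reported in the paper.

```python
# Illustrative sketch: rank EEG channels by aggregated |SHAP| values for a 1D-CNN.
# All sizes, names, and the model architecture are hypothetical placeholders.
import numpy as np
import shap
from tensorflow.keras import layers, models

N_CHANNELS = 18                      # assumed number of bipolar EEG channels
WINDOW_LEN = 256                     # assumed samples per EEG window
CHANNEL_NAMES = [f"ch{i}" for i in range(N_CHANNELS)]  # placeholders, e.g. "P7-O1"

# Minimal 1D-CNN binary classifier (preictal vs. interictal), for illustration only.
# In practice the network would be trained on labelled EEG windows first.
model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Conv1D(32, 7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])

# Synthetic stand-ins for preprocessed EEG windows; real use would load CHB-MIT segments.
X_background = np.random.randn(64, WINDOW_LEN, N_CHANNELS).astype("float32")
X_eval = np.random.randn(16, WINDOW_LEN, N_CHANNELS).astype("float32")

# Compute per-sample, per-timestep, per-channel attributions.
explainer = shap.GradientExplainer(model, X_background)
shap_values = explainer.shap_values(X_eval)
sv = np.squeeze(np.asarray(shap_values))   # -> (n_samples, WINDOW_LEN, N_CHANNELS)

# Aggregate |SHAP| over samples and time to obtain one importance score per channel.
channel_importance = np.abs(sv).mean(axis=(0, 1))
ranking = sorted(zip(CHANNEL_NAMES, channel_importance), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.4f}")
```

With a trained model and real CHB-MIT windows, the printed ranking would indicate which channels the classifier leans on most, which is the kind of evidence the abstract cites for "P7-O1" and "P3-O1".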
Journal overview:
Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.