{"title":"Explainable deep learning framework for brain tumor detection: Integrating LIME, Grad-CAM, and SHAP for enhanced accuracy","authors":"Abdurrahim Akgündoğdu , Şerife Çelikbaş","doi":"10.1016/j.medengphy.2025.104405","DOIUrl":null,"url":null,"abstract":"<div><div>Deep learning approaches have improved disease diagnosis efficiency. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models using explainable artificial intelligence (XAI) techniques for brain tumor detection. A two-stage training approach and XAI methods were implemented. The proposed convolutional neural network achieved 97.20% accuracy, 98.00% sensitivity, 96.40% specificity, and 98.90% ROC-AUC on the BRATS2019 dataset. It was analyzed with explainability techniques including Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and Shapley Additive Explanations (SHAP). The masks generated from these analyses enhanced the dataset, leading to a higher accuracy of 99.40%, 99.20% sensitivity, 99.60% specificity, 99.60% precision, and 99.90% ROC-AUC in the final stage. The integration of LIME, Grad-CAM, and SHAP showed significant success by increasing the accuracy performance of the model from 97.20% to 99.40%. Furthermore, the model was evaluated for fidelity, stability, and consistency and showed reliable and stable results. The same strategy was applied to the BR35H dataset to test the generalizability of the model, and the accuracy increased from 96.80% to 99.80% on this dataset as well, supporting the effectiveness of the method on different data sources.</div></div>","PeriodicalId":49836,"journal":{"name":"Medical Engineering & Physics","volume":"144 ","pages":"Article 104405"},"PeriodicalIF":2.3000,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Engineering & Physics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350453325001249","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Abstract
Deep learning approaches have improved the efficiency of disease diagnosis; however, AI-based decision systems still lack sufficient transparency and interpretability. This study aims to enhance both the explainability and the training performance of deep learning models for brain tumor detection using explainable artificial intelligence (XAI) techniques. A two-stage training approach combined with XAI methods was implemented. In the first stage, the proposed convolutional neural network achieved 97.20% accuracy, 98.00% sensitivity, 96.40% specificity, and 98.90% ROC-AUC on the BRATS2019 dataset. The network was then analyzed with three explainability techniques: Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and Shapley Additive Explanations (SHAP). The masks generated from these analyses were used to enhance the dataset, yielding 99.40% accuracy, 99.20% sensitivity, 99.60% specificity, 99.60% precision, and 99.90% ROC-AUC in the final stage. Integrating LIME, Grad-CAM, and SHAP thus raised the model's accuracy from 97.20% to 99.40%. The model was also evaluated for fidelity, stability, and consistency, and produced reliable, stable results. To test generalizability, the same strategy was applied to the BR35H dataset, where accuracy increased from 96.80% to 99.80%, supporting the effectiveness of the method across different data sources.
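The abstract outlines a pipeline rather than an implementation, so the following is only a minimal sketch of the core idea, assuming a PyTorch CNN: generate an XAI saliency mask (Grad-CAM here, the simplest of the three methods to implement by hand) and use it to enhance the training images for the second stage. The architecture (SmallCNN), the binarization threshold, and the multiplicative enhancement operator are hypothetical stand-ins; the paper's actual model, mask-fusion rule, and LIME/SHAP steps are not specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy two-block CNN standing in for the paper's unspecified architecture."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        fmaps = self.features(x)                         # (B, 32, H/4, W/4)
        pooled = F.adaptive_avg_pool2d(fmaps, 1).flatten(1)
        return self.classifier(pooled), fmaps            # logits and feature maps

def gradcam_mask(model, img, target_class, thresh=0.5):
    """Compute a Grad-CAM heatmap for one image and binarize it into a mask."""
    logits, fmaps = model(img)
    fmaps.retain_grad()                                  # keep grads of the non-leaf maps
    model.zero_grad()
    logits[0, target_class].backward()
    weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)  # per-channel GAP of gradients
    cam = F.relu((weights * fmaps).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[-2:],
                        mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    return (cam > thresh).float()

# Second stage (hypothetical enhancement operator): weight each slice by its
# XAI-derived mask so salient regions dominate, then retrain on the result.
model = SmallCNN()
mri = torch.randn(1, 1, 64, 64)          # placeholder for an MRI slice
mask = gradcam_mask(model, mri, target_class=1)
enhanced = mri * mask                    # masked input for stage-2 training
```

In a full reproduction of the paper's method, analogous saliency masks from LIME and SHAP would presumably be combined with the Grad-CAM mask before the second training stage.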
Journal overview:
Medical Engineering & Physics provides a forum for the publication of the latest developments in biomedical engineering and reflects the essential multidisciplinary nature of the subject. The journal publishes in-depth critical reviews, scientific papers, and technical notes. Our focus encompasses the application of the basic principles of physics and engineering to the development of medical devices and technology, with the ultimate aim of producing improvements in the quality of health care. Topics covered include biomechanics, biomaterials, mechanobiology, rehabilitation engineering, biomedical signal processing, and medical device development. Medical Engineering & Physics aims to keep both engineers and clinicians abreast of the latest applications of technology to health care.