Explainable deep learning framework for brain tumor detection: Integrating LIME, Grad-CAM, and SHAP for enhanced accuracy

IF 2.3 · JCR Q3 (Engineering, Biomedical) · CAS Medicine, Tier 4
Abdurrahim Akgündoğdu, Şerife Çelikbaş
{"title":"Explainable deep learning framework for brain tumor detection: Integrating LIME, Grad-CAM, and SHAP for enhanced accuracy","authors":"Abdurrahim Akgündoğdu ,&nbsp;Şerife Çelikbaş","doi":"10.1016/j.medengphy.2025.104405","DOIUrl":null,"url":null,"abstract":"<div><div>Deep learning approaches have improved disease diagnosis efficiency. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models using explainable artificial intelligence (XAI) techniques for brain tumor detection. A two-stage training approach and XAI methods were implemented. The proposed convolutional neural network achieved 97.20% accuracy, 98.00% sensitivity, 96.40% specificity, and 98.90% ROC-AUC on the BRATS2019 dataset. It was analyzed with explainability techniques including Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and Shapley Additive Explanations (SHAP). The masks generated from these analyses enhanced the dataset, leading to a higher accuracy of 99.40%, 99.20% sensitivity, 99.60% specificity, 99.60% precision, and 99.90% ROC-AUC in the final stage. The integration of LIME, Grad-CAM, and SHAP showed significant success by increasing the accuracy performance of the model from 97.20% to 99.40%. Furthermore, the model was evaluated for fidelity, stability, and consistency and showed reliable and stable results. The same strategy was applied to the BR35H dataset to test the generalizability of the model, and the accuracy increased from 96.80% to 99.80% on this dataset as well, supporting the effectiveness of the method on different data sources.</div></div>","PeriodicalId":49836,"journal":{"name":"Medical Engineering & Physics","volume":"144 ","pages":"Article 104405"},"PeriodicalIF":2.3000,"publicationDate":"2025-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Engineering & Physics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1350453325001249","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Deep learning approaches have improved the efficiency of disease diagnosis. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models for brain tumor detection using explainable artificial intelligence (XAI) techniques. A two-stage training approach combined with XAI methods was implemented. The proposed convolutional neural network achieved 97.20% accuracy, 98.00% sensitivity, 96.40% specificity, and 98.90% ROC-AUC on the BRATS2019 dataset. The model was then analyzed with explainability techniques including Local Interpretable Model-Agnostic Explanations (LIME), Gradient-weighted Class Activation Mapping (Grad-CAM), and Shapley Additive Explanations (SHAP). The masks generated from these analyses were used to enhance the dataset, raising final-stage performance to 99.40% accuracy, 99.20% sensitivity, 99.60% specificity, 99.60% precision, and 99.90% ROC-AUC. The integration of LIME, Grad-CAM, and SHAP thus increased the model's accuracy from 97.20% to 99.40%. Furthermore, the model was evaluated for fidelity, stability, and consistency, and showed reliable and stable results. The same strategy was applied to the BR35H dataset to test the generalizability of the model; accuracy increased from 96.80% to 99.80% on that dataset as well, supporting the effectiveness of the method across different data sources.
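As a concrete illustration of the mask-generation step described in the abstract, the sketch below shows how a Grad-CAM attention mask could be derived from a trained Keras CNN and used to emphasise tumor-relevant regions before a second training stage. This is a minimal sketch under assumed names (grad_cam_mask, conv_layer_name, the 0.5 threshold, and the mask-weighting scheme are illustrative, not details taken from the paper); the LIME and SHAP masks would be produced analogously with their respective libraries.

```python
# A minimal sketch (not the authors' code) of the explanation step:
# generating a Grad-CAM attention mask from a trained Keras CNN. The layer
# name, threshold, and mask-weighting in the usage comment are assumptions.
import numpy as np
import tensorflow as tf

def grad_cam_mask(model, image, conv_layer_name, threshold=0.5):
    """Return a binary Grad-CAM mask for a single image of shape (H, W, C)."""
    # Model that exposes both the chosen conv feature maps and the predictions.
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )
    img_batch = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        class_idx = tf.argmax(preds[0])          # explain the predicted class
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)     # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam)                            # keep positive evidence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)          # normalise to [0, 1]
    cam = tf.image.resize(cam[..., tf.newaxis], image.shape[:2])[..., 0]
    return (cam.numpy() >= threshold).astype(np.float32)

# Hypothetical second-stage use: emphasise the salient regions and retrain.
# masks = np.stack([grad_cam_mask(model, img, "last_conv") for img in images])
# masked_images = images * (0.5 + 0.5 * masks[..., np.newaxis])
# model.fit(masked_images, labels, epochs=..., validation_data=...)
```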
Source journal
Medical Engineering & Physics
Medical Engineering & Physics 工程技术-工程:生物医学
CiteScore: 4.30
Self-citation rate: 4.50%
Articles published per year: 172
Review time: 3.0 months
Journal description: Medical Engineering & Physics provides a forum for the publication of the latest developments in biomedical engineering, and reflects the essential multidisciplinary nature of the subject. The journal publishes in-depth critical reviews, scientific papers and technical notes. Our focus encompasses the application of the basic principles of physics and engineering to the development of medical devices and technology, with the ultimate aim of producing improvements in the quality of health care. Topics covered include biomechanics, biomaterials, mechanobiology, rehabilitation engineering, biomedical signal processing and medical device development. Medical Engineering & Physics aims to keep both engineers and clinicians abreast of the latest applications of technology to health care.