An explainable artificial intelligence model for multiple lung diseases classification from chest X-ray images using fine-tuned transfer learning

Eram Mahamud, Nafiz Fahad, Md Assaduzzaman, S.M. Zain, Kah Ong Michael Goh, Md. Kishor Morol
{"title":"An explainable artificial intelligence model for multiple lung diseases classification from chest X-ray images using fine-tuned transfer learning","authors":"Eram Mahamud ,&nbsp;Nafiz Fahad ,&nbsp;Md Assaduzzaman ,&nbsp;S.M. Zain ,&nbsp;Kah Ong Michael Goh ,&nbsp;Md. Kishor Morol","doi":"10.1016/j.dajour.2024.100499","DOIUrl":null,"url":null,"abstract":"<div><p>Traditional deep learning models are often considered “black boxes” due to their lack of interpretability, which limits their therapeutic use despite their success in classification tasks. This study aims to improve the interpretability of diagnoses for COVID-19, pneumonia, and tuberculosis from X-ray images using an enhanced DenseNet201 model within a transfer learning framework. We incorporated Explainable Artificial Intelligence (XAI) techniques, including SHAP, LIME, Grad-CAM, and Grad-CAM++, to make the model’s decisions more understandable. To enhance image clarity and detail, we applied preprocessing methods such as Denoising Autoencoder, Contrast Limited Adaptive Histogram Equalization (CLAHE), and Gamma Correction. An ablation study was conducted to identify the optimal parameters for the proposed approach. Our model’s performance was compared with other transfer learning-based models like EfficientNetB0, InceptionV3, and LeNet using evaluation metrics. The model that included data augmentation techniques achieved the best results, with an accuracy of 99.20%, and precision and recall of 99%. This demonstrates the critical role of data augmentation in improving model performance. SHAP and LIME provided significant insights into the model’s decision-making process, while Grad-CAM and Grad-CAM++ highlighted specific image features and regions influencing the model’s classifications. These techniques enhanced transparency and trust in AI-assisted diagnoses. Finally, we developed an Android-based system using the most effective model to support medical specialists in their decision-making process.</p></div>","PeriodicalId":100357,"journal":{"name":"Decision Analytics Journal","volume":"12 ","pages":"Article 100499"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772662224001036/pdfft?md5=b430bd720529ecff7f7f19d1a65e9d47&pid=1-s2.0-S2772662224001036-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Analytics Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772662224001036","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Traditional deep learning models are often considered “black boxes” because they lack interpretability, which limits their clinical use despite their success in classification tasks. This study aims to improve the interpretability of diagnoses for COVID-19, pneumonia, and tuberculosis from chest X-ray images using an enhanced DenseNet201 model within a transfer learning framework. We incorporated Explainable Artificial Intelligence (XAI) techniques, including SHAP, LIME, Grad-CAM, and Grad-CAM++, to make the model’s decisions more understandable. To enhance image clarity and detail, we applied preprocessing methods such as a denoising autoencoder, Contrast Limited Adaptive Histogram Equalization (CLAHE), and gamma correction. An ablation study identified the optimal parameters for the proposed approach. The model’s performance was compared with that of other transfer learning-based models, such as EfficientNetB0, InceptionV3, and LeNet, using accuracy, precision, and recall. The model trained with data augmentation achieved the best results, with 99.20% accuracy and 99% precision and recall, demonstrating the critical role of augmentation in improving model performance. SHAP and LIME provided significant insight into the model’s decision-making process, while Grad-CAM and Grad-CAM++ highlighted the specific image features and regions that influenced its classifications. Together, these techniques enhance transparency and trust in AI-assisted diagnosis. Finally, we developed an Android-based system around the best-performing model to support medical specialists in their decision-making.
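To make the preprocessing stage concrete: CLAHE equalizes contrast locally within image tiles (clipping the histogram to limit noise amplification), and gamma correction remaps intensities nonlinearly. Below is a minimal sketch of that stage with OpenCV; the clip limit, tile grid, gamma value, and 224×224 target size are illustrative assumptions rather than the settings selected by the paper’s ablation study, and the denoising-autoencoder step is omitted.

```python
import cv2
import numpy as np

def preprocess_xray(path, clip_limit=2.0, tile_grid=(8, 8), gamma=1.5):
    """Apply CLAHE and gamma correction to one grayscale chest X-ray."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # CLAHE: local histogram equalization with a clip limit that caps
    # how strongly any tile's contrast (and noise) is amplified.
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    img = clahe.apply(img)

    # Gamma correction via lookup table: O = 255 * (I / 255) ** (1 / gamma),
    # which brightens mid-tones for gamma > 1.
    table = np.array([255.0 * (i / 255.0) ** (1.0 / gamma)
                      for i in range(256)]).astype(np.uint8)
    img = cv2.LUT(img, table)

    # Resize and replicate the channel to match a 3-channel CNN input.
    img = cv2.resize(img, (224, 224))
    return np.stack([img] * 3, axis=-1)
```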
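The transfer-learning backbone can be sketched in Keras as follows. This is a minimal illustration, not the paper’s tuned configuration: the four-class setup (COVID-19, pneumonia, tuberculosis, normal), the frozen-layer split, the head sizes, and the augmentation factors are all assumptions. Building the head functionally on base.input, rather than nesting the backbone inside a wrapper model, keeps DenseNet201’s internal layers addressable by name, which the Grad-CAM sketch below relies on.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 4  # assumed: COVID-19, pneumonia, tuberculosis, normal

# ImageNet-pretrained DenseNet201 without its classifier head.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))

# Fine-tune only the last layers; the split point (50) is illustrative.
for layer in base.layers[:-50]:
    layer.trainable = False

# New classification head on top of the pooled DenseNet features.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy",
             tf.keras.metrics.Precision(name="precision"),
             tf.keras.metrics.Recall(name="recall")])

# Data augmentation as preprocessing layers, applied to training data only,
# e.g. train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y)).
augment = tf.keras.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.05, 0.05),
])
```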
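Grad-CAM itself reduces to a few lines once the last convolutional feature map is reachable: the gradients of the class score with respect to that feature map are averaged spatially into one weight per channel, and the ReLU of the weighted channel sum gives the heatmap. A sketch against the model above, assuming Keras’s DenseNet201 layer naming in which the final feature-map activation is called "relu":

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer="relu", class_index=None):
    """Return a [0, 1] heatmap; `image` has a batch axis: (1, 224, 224, 3)."""
    # Map the input to both the conv feature map and the final predictions.
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(conv_layer).output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the top class
        score = preds[:, class_index]

    # Per-channel weights: spatially averaged gradients of the class score.
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted channel sum, ReLU, then normalize to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```

The low-resolution heatmap (7×7 for a 224×224 input) is then upsampled to the image size and blended over the X-ray; Grad-CAM++ differs mainly in deriving the channel weights from higher-order gradients, which sharpens localization when multiple regions contribute to the prediction.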
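On the perturbation side, LIME explains a single prediction by occluding superpixels and fitting a local surrogate model to the resulting score changes. A brief sketch using the lime package, where the sample and feature counts are illustrative:

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

# `image` is one preprocessed 3-channel X-ray (no batch axis);
# `model.predict` maps a batch of images to class probabilities.
explanation = explainer.explain_instance(
    image.astype("double"), model.predict,
    top_labels=1, hide_color=0, num_samples=1000)

# Outline the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(temp / 255.0, mask)
```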
