Enhancing Pneumonia Diagnosis Through AI Interpretability: Comparative Analysis of Pixel-Level Interpretability and Grad-CAM on X-ray Imaging With VGG19

Mohammad Ennab; Hamid Mcheick
{"title":"Enhancing Pneumonia Diagnosis Through AI Interpretability: Comparative Analysis of Pixel-Level Interpretability and Grad-CAM on X-ray Imaging With VGG19","authors":"Mohammad Ennab;Hamid Mcheick","doi":"10.1109/OJCS.2025.3582726","DOIUrl":null,"url":null,"abstract":"Pneumonia is a leading cause of morbidity and mortality worldwide, necessitating timely and precise diagnosis for effective treatment. Chest X-rays are the primary diagnostic tool, but their interpretation demands substantial expertise. Recent advancements in AI have shown promise in enhancing pneumonia detection from X-ray images, yet the opacity of deep learning models raises concerns about their clinical adoption. Interpretability in AI models is vital for fostering trust among healthcare professionals by providing transparency in decision-making processes. This study conducts a comparative analysis of two interpretability methods, Pixel Level Interpretability (PLI) and Gradient-weighted Class Activation Mapping (Grad-CAM), in the context of pneumonia classification using VGG19 on X-ray datasets. The research includes an experiment involving three distinct X-ray datasets. VGG19 is applied to classify a query image, and both PLI and Grad-CAM are used to interpret the classification decisions. The study evaluates these interpretability methods across multiple dimensions: computational efficiency, diagnostic performance, explanation continuity, calibration accuracy, robustness to training parameters, and feedback from medical experts. Our findings aim to determine which interpretability technique offers a more clinically meaningful explanation, balancing computational feasibility and diagnostic reliability. This study contributes to the development of explainable AI in healthcare, supporting the integration of trustworthy AI systems in clinical environments for enhanced pneumonia diagnosis.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1155-1165"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11049939","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11049939/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Pneumonia is a leading cause of morbidity and mortality worldwide, necessitating timely and precise diagnosis for effective treatment. Chest X-rays are the primary diagnostic tool, but their interpretation demands substantial expertise. Recent advancements in AI have shown promise in enhancing pneumonia detection from X-ray images, yet the opacity of deep learning models raises concerns about their clinical adoption. Interpretability in AI models is vital for fostering trust among healthcare professionals by providing transparency in decision-making processes. This study conducts a comparative analysis of two interpretability methods, Pixel-Level Interpretability (PLI) and Gradient-weighted Class Activation Mapping (Grad-CAM), in the context of pneumonia classification using VGG19 on X-ray datasets. The research includes an experiment involving three distinct X-ray datasets. VGG19 is applied to classify a query image, and both PLI and Grad-CAM are used to interpret the classification decisions. The study evaluates these interpretability methods across multiple dimensions: computational efficiency, diagnostic performance, explanation continuity, calibration accuracy, robustness to training parameters, and feedback from medical experts. Our analysis aims to determine which interpretability technique offers a more clinically meaningful explanation, balancing computational feasibility and diagnostic reliability. This study contributes to the development of explainable AI in healthcare, supporting the integration of trustworthy AI systems in clinical environments for enhanced pneumonia diagnosis.
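For orientation, the sketch below shows one standard way to compute a Grad-CAM heatmap for a VGG19 classifier in PyTorch. The abstract does not include code, so the pretrained torchvision weights, the choice of `features[35]` (the activation after the last convolutional layer) as the target layer, the 224×224 input size, and the class index are illustrative assumptions rather than details from the study.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained torchvision VGG19 stands in for the paper's fine-tuned model
# (an assumption: the study's exact weights and classifier head are not given).
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT).eval()

# Capture the feature maps produced after the last convolutional layer.
feature_maps = {}
model.features[35].register_forward_hook(
    lambda module, inputs, output: feature_maps.update(value=output)
)

def grad_cam(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Return a Grad-CAM heatmap in [0, 1] at the input's spatial resolution."""
    logits = model(x)                                  # x: (1, 3, 224, 224)
    fmap = feature_maps["value"]                       # (1, 512, 14, 14)
    # Gradient of the target class score w.r.t. the feature maps.
    grads = torch.autograd.grad(logits[0, class_idx], fmap)[0]
    # Grad-CAM: channel weights = global-average-pooled gradients.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

# Toy call on random data; a real chest X-ray would first be resized and
# normalized with torchvision's ImageNet statistics.
heatmap = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
```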
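Among the evaluation dimensions listed in the abstract, calibration accuracy is commonly quantified with the expected calibration error (ECE). The sketch below is a minimal NumPy version of the usual equal-width-bin formulation; the 10-bin setting and the function name are illustrative assumptions, since the abstract does not state how calibration was measured.

```python
import numpy as np

def expected_calibration_error(confidences: np.ndarray,
                               correct: np.ndarray,
                               n_bins: int = 10) -> float:
    """Weighted mean gap between confidence and accuracy over equal-width bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |accuracy - mean confidence| in this bin, weighted by bin mass.
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)
```

A well-calibrated classifier yields an ECE near zero: predictions made with, say, 90% confidence are correct about 90% of the time.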