Enhancing Pneumonia Diagnosis Through AI Interpretability: Comparative Analysis of Pixel-Level Interpretability and Grad-CAM on X-ray Imaging With VGG19
{"title":"Enhancing Pneumonia Diagnosis Through AI Interpretability: Comparative Analysis of Pixel-Level Interpretability and Grad-CAM on X-ray Imaging With VGG19","authors":"Mohammad Ennab;Hamid Mcheick","doi":"10.1109/OJCS.2025.3582726","DOIUrl":null,"url":null,"abstract":"Pneumonia is a leading cause of morbidity and mortality worldwide, necessitating timely and precise diagnosis for effective treatment. Chest X-rays are the primary diagnostic tool, but their interpretation demands substantial expertise. Recent advancements in AI have shown promise in enhancing pneumonia detection from X-ray images, yet the opacity of deep learning models raises concerns about their clinical adoption. Interpretability in AI models is vital for fostering trust among healthcare professionals by providing transparency in decision-making processes. This study conducts a comparative analysis of two interpretability methods, Pixel Level Interpretability (PLI) and Gradient-weighted Class Activation Mapping (Grad-CAM), in the context of pneumonia classification using VGG19 on X-ray datasets. The research includes an experiment involving three distinct X-ray datasets. VGG19 is applied to classify a query image, and both PLI and Grad-CAM are used to interpret the classification decisions. The study evaluates these interpretability methods across multiple dimensions: computational efficiency, diagnostic performance, explanation continuity, calibration accuracy, robustness to training parameters, and feedback from medical experts. Our findings aim to determine which interpretability technique offers a more clinically meaningful explanation, balancing computational feasibility and diagnostic reliability. This study contributes to the development of explainable AI in healthcare, supporting the integration of trustworthy AI systems in clinical environments for enhanced pneumonia diagnosis.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1155-1165"},"PeriodicalIF":0.0000,"publicationDate":"2025-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11049939","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11049939/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Pneumonia is a leading cause of morbidity and mortality worldwide, necessitating timely and precise diagnosis for effective treatment. Chest X-rays are the primary diagnostic tool, but their interpretation demands substantial expertise. Recent advancements in AI have shown promise in enhancing pneumonia detection from X-ray images, yet the opacity of deep learning models raises concerns about their clinical adoption. Interpretability in AI models is vital for fostering trust among healthcare professionals by providing transparency in decision-making processes. This study conducts a comparative analysis of two interpretability methods, Pixel-Level Interpretability (PLI) and Gradient-weighted Class Activation Mapping (Grad-CAM), in the context of pneumonia classification using VGG19 on X-ray datasets. The research includes an experiment involving three distinct X-ray datasets. VGG19 is applied to classify a query image, and both PLI and Grad-CAM are used to interpret the classification decisions. The study evaluates these interpretability methods across multiple dimensions: computational efficiency, diagnostic performance, explanation continuity, calibration accuracy, robustness to training parameters, and feedback from medical experts. Our analysis aims to determine which interpretability technique offers a more clinically meaningful explanation, balancing computational feasibility and diagnostic reliability. This study contributes to the development of explainable AI in healthcare, supporting the integration of trustworthy AI systems in clinical environments for enhanced pneumonia diagnosis.
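To make the Grad-CAM side of the comparison concrete, the sketch below shows the standard Grad-CAM computation on a VGG19 classifier in TensorFlow/Keras. It is a minimal illustration only, not the paper's pipeline: it assumes an ImageNet-pretrained VGG19 as a stand-in for the fine-tuned pneumonia model, the layer name `block5_conv4` as the last convolutional layer, and a random array in place of a preprocessed chest X-ray. The PLI method and the evaluation dimensions listed above are not reproduced here.

```python
# Minimal Grad-CAM sketch for a VGG19 classifier (illustrative only; the
# study's fine-tuned model, preprocessing, and PLI method are not shown).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

def grad_cam(model, image, conv_layer_name="block5_conv4", class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (1, 224, 224, 3)."""
    # Sub-model mapping the input to (last conv feature maps, class predictions).
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))  # explain the top-scoring class
        class_score = preds[:, class_index]
    # Gradients of the class score with respect to the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Global-average-pool the gradients to get per-channel importance weights.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, ReLU, then normalize to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()

if __name__ == "__main__":
    model = VGG19(weights="imagenet")           # stand-in for the fine-tuned classifier
    x = np.random.rand(1, 224, 224, 3) * 255.0  # placeholder for a chest X-ray
    heatmap = grad_cam(model, preprocess_input(x))
    print(heatmap.shape)                        # (14, 14) spatial importance map
```

In practice the low-resolution heatmap is upsampled to the input size and overlaid on the radiograph, which is the coarse, region-level explanation that the study contrasts with the finer pixel-level attributions produced by PLI.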