A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods
Naeem Ullah, Florentina Guzmán-Aroca, Francisco Martínez-Álvarez, Ivanoe De Falco, Giovanna Sannino
Medical Image Analysis, vol. 105, Article 103665. DOI: 10.1016/j.media.2025.103665. Published 2025-06-06.
Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence (XAI) methods often focus solely on visualization or on rule-based explanations, limiting the depth and clarity of the interpretations they provide. This work proposes a novel explainable AI method specifically designed for medical image analysis that integrates statistical, visual, and rule-based explanations to improve the transparency of deep learning models. Statistical features are derived from deep features extracted with a custom MobileNetV2 model. A two-step feature selection method, zero-based filtering followed by mutual importance selection, ranks and refines these features. Decision tree and RuleFit models are then employed to classify the data and extract human-readable rules. Additionally, a novel statistical feature map overlay visualization generates heatmap-like representations of three key statistical measures (mean, skewness, and entropy), providing localized, quantifiable visual explanations of model decisions. The proposed method has been validated on five medical imaging datasets, covering COVID-19 radiography, breast cancer ultrasound, brain tumor magnetic resonance imaging, lung and colon cancer histopathology, and glaucoma images, with results confirmed by medical experts, demonstrating its effectiveness in enhancing the interpretability of medical image classification tasks.
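The pipeline described above begins by collapsing deep feature maps into scalar statistics. The following is a minimal sketch of that step, assuming torchvision's stock mobilenet_v2 backbone as a stand-in for the authors' custom model, and assuming (mean, skewness, entropy) as the per-channel statistics; the paper's exact feature set and preprocessing are not reproduced here.

```python
# Sketch: derive statistical features from deep MobileNetV2 feature maps.
# Assumptions: stock torchvision weights stand in for the custom model;
# per-channel (mean, skewness, entropy) stand in for the paper's feature set.
import numpy as np
import torch
from scipy.stats import skew, entropy
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2(weights="IMAGENET1K_V1").features.eval()

def statistical_features(image: torch.Tensor) -> np.ndarray:
    """image: normalized (3, 224, 224) float tensor -> 3 statistics per channel."""
    with torch.no_grad():
        fmap = backbone(image.unsqueeze(0))[0]      # (C, H, W) deep features
    feats = []
    for channel in fmap.numpy():                    # one feature map per channel
        flat = channel.ravel()
        hist, _ = np.histogram(flat, bins=32, density=True)
        feats.extend([flat.mean(), skew(flat), entropy(hist)])
    return np.asarray(feats)                        # length = 3 * C
```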
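Feature selection and rule extraction could then look like the sketch below. "Zero-based filtering" is read here as discarding features that are zero across all samples, and "mutual importance selection" as a mutual-information ranking; both readings are assumptions, as is the use of a plain scikit-learn decision tree (a RuleFit implementation from a third-party package would slot in the same way to produce rule ensembles).

```python
# Sketch: two-step feature selection plus human-readable rules.
# Assumptions: zero-based filter = drop all-zero columns; mutual importance =
# mutual information; a decision tree stands in for the tree/RuleFit pair.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier, export_text

def select_features(X, y, top_k=50):
    nonzero = np.any(X != 0, axis=0)                 # step 1: zero-based filter
    Xf = X[:, nonzero]
    mi = mutual_info_classif(Xf, y, random_state=0)  # step 2: mutual importance
    keep = np.argsort(mi)[::-1][:top_k]              # rank and refine
    return Xf[:, keep], np.flatnonzero(nonzero)[keep]

# Toy data: 200 samples, 300 features, some all-zero columns.
X = np.random.rand(200, 300); X[:, ::7] = 0.0
y = np.random.randint(0, 2, 200)
X_sel, idx = select_features(X, y)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_sel, y)
print(export_text(tree, feature_names=[f"f{i}" for i in idx]))  # readable rules
```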
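Finally, the statistical feature map overlay can be approximated by reducing a deep feature map to one statistic per spatial location, upsampling to image resolution, and blending over the input. The paper's exact overlay recipe is not public; the statistic shown (channel-wise mean), the blending weight, and the colormap below are illustrative choices.

```python
# Sketch: heatmap-like overlay of one statistic of a deep feature map.
# Assumptions: channel-wise mean as the statistic (skewness or entropy would be
# computed per spatial location the same way); alpha and colormap are ours.
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F

def overlay_statistic(image: torch.Tensor, fmap: torch.Tensor, alpha=0.4):
    """image: (3, H, W) in [0, 1]; fmap: (C, h, w) deep features."""
    stat = fmap.mean(dim=0, keepdim=True)            # per-pixel statistic
    stat = F.interpolate(stat.unsqueeze(0), size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0, 0]
    stat = (stat - stat.min()) / (stat.max() - stat.min() + 1e-8)
    plt.imshow(image.permute(1, 2, 0).numpy())       # base image
    plt.imshow(stat.numpy(), cmap="jet", alpha=alpha)  # heatmap overlay
    plt.axis("off")
    plt.show()
```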
Journal introduction:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.