A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods

IF 11.8 · CAS Tier 1 (Medicine) · JCR Q1, Computer Science, Artificial Intelligence
Naeem Ullah, Florentina Guzmán-Aroca, Francisco Martínez-Álvarez, Ivanoe De Falco, Giovanna Sannino
Journal: Medical Image Analysis, Volume 105, Article 103665
DOI: 10.1016/j.media.2025.103665
Published: 2025-06-06 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1361841525002129
Citations: 0

Abstract


Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence methods often focus solely on visualization or rule-based explanations, limiting interpretability’s depth and clarity. This work proposes a novel explainable AI method specifically designed for medical image analysis, integrating statistical, visual, and rule-based explanations to improve transparency in deep learning models. Statistical features are derived from deep features extracted using a custom Mobilenetv2 model. A two-step feature selection method – zero-based filtering with mutual importance selection – ranks and refines these features. Decision tree and RuleFit models are employed to classify data and extract human-readable rules. Additionally, a novel statistical feature map overlay visualization generates heatmap-like representations of three key statistical measures (mean, skewness, and entropy), providing both localized and quantifiable visual explanations of model decisions. The proposed method has been validated on five medical imaging datasets – COVID-19 radiography, ultrasound breast cancer, brain tumor magnetic resonance imaging, lung and colon cancer histopathological, and glaucoma images – with results confirmed by medical experts, demonstrating its effectiveness in enhancing interpretability for medical image classification tasks.
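The statistical feature map overlay described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: it assumes an (H, W, C) feature tensor taken from the final convolutional stage of the MobileNetV2-style backbone, and computes the three per-location statistics (mean, skewness, Shannon entropy) across the channel axis. The function name and the shift-and-normalize step used to turn activations into a probability distribution for the entropy are my own choices.

```python
import numpy as np

def statistical_feature_maps(features, eps=1e-8):
    """Collapse an (H, W, C) deep-feature tensor into three (H, W) maps:
    per-location mean, skewness, and Shannon entropy across channels."""
    mu = features.mean(axis=-1)
    sigma = features.std(axis=-1)
    # Skewness: third standardized moment across the channel axis.
    skew_map = ((features - mu[..., None]) ** 3).mean(axis=-1) / (sigma ** 3 + eps)
    # Entropy: treat the shifted, normalized channel activations at each
    # spatial location as a discrete probability distribution.
    shifted = features - features.min(axis=-1, keepdims=True) + eps
    probs = shifted / shifted.sum(axis=-1, keepdims=True)
    entropy_map = -(probs * np.log(probs)).sum(axis=-1)
    return mu, skew_map, entropy_map
```

Each resulting (H, W) map would then be upsampled to the input-image resolution and blended over the image as a heatmap, which is presumably how the paper produces its localized visual explanations.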
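The rule-extraction step can likewise be illustrated with a minimal sketch. Here scikit-learn's DecisionTreeClassifier is fitted on a public tabular dataset as a stand-in for the paper's selected statistical features; `export_text` then renders the fitted tree as human-readable if/else rules, analogous to the decision-tree explanations the framework produces. The dataset and depth limit are illustrative choices, not details from the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in feature matrix; in the paper this would be the selected
# statistical features derived from the deep-feature maps.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Render the fitted tree as plain-text if/else rules.
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

A shallow depth cap keeps the extracted rule set short enough for a clinician to read, which is the point of this explanation mode.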
Source journal

Medical Image Analysis (Engineering Technology: Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal introduction: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.