Fostering trust and interpretability: integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency.

IF 2.3 · Medicine, CAS Tier 3 · JCR Q2 (Pathology)
Renuka Agrawal, Tawishi Gupta, Shaurya Gupta, Sakshi Chauhan, Prisha Patel, Safa Hamdare
{"title":"促进信任和可解释性:将可解释的人工智能(XAI)与机器学习相结合,以增强疾病预测和决策透明度。","authors":"Renuka Agrawal, Tawishi Gupta, Shaurya Gupta, Sakshi Chauhan, Prisha Patel, Safa Hamdare","doi":"10.1186/s13000-025-01686-3","DOIUrl":null,"url":null,"abstract":"<p><p>Medical healthcare has advanced substantially due to advancements in Artificial Intelligence (AI) techniques for early disease detection alongside support for clinical decisions. However, a gap exists in widespread adoption of results of these algorithms by public due to black box nature of models. The undisclosed nature of these systems creates fundamental obstacles within medical sectors that handle crucial cases because medical practitioners needs to understand the reasoning behind the outcome of a particular disease. A hybrid Machine Learning (ML) framework integrating Explainable AI (XAI) strategies that will improve both predictive performance and interpretability is explored in proposed work. The system leverages Decision Trees, Naive Bayes, Random Forests and XGBoost algorithms to predict the medical condition risks of Diabetes, Anaemia, Thalassemia, Heart Disease, Thrombocytopenia within its framework. SHAP (SHapley Additive exPlanations) together with LIME (Local Interpretable Model-agnostic Explanations) adds functionality to the proposed system by displaying important features contributing to each prediction. The framework upholds an accuracy of 99.2% besides the ability to provide understandable explanations for interpretation of model outputs. The performance combined with interpretability from the framework enables clinical practitioners to make decisions through an understanding of AI-generated outputs thereby reducing distrust in AI-driven healthcare.</p>","PeriodicalId":11237,"journal":{"name":"Diagnostic Pathology","volume":"20 1","pages":"105"},"PeriodicalIF":2.3000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12465982/pdf/","citationCount":"0","resultStr":"{\"title\":\"Fostering trust and interpretability: integrating explainable AI (XAI) with machine learning for enhanced disease prediction and decision transparency.\",\"authors\":\"Renuka Agrawal, Tawishi Gupta, Shaurya Gupta, Sakshi Chauhan, Prisha Patel, Safa Hamdare\",\"doi\":\"10.1186/s13000-025-01686-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Medical healthcare has advanced substantially due to advancements in Artificial Intelligence (AI) techniques for early disease detection alongside support for clinical decisions. However, a gap exists in widespread adoption of results of these algorithms by public due to black box nature of models. The undisclosed nature of these systems creates fundamental obstacles within medical sectors that handle crucial cases because medical practitioners needs to understand the reasoning behind the outcome of a particular disease. A hybrid Machine Learning (ML) framework integrating Explainable AI (XAI) strategies that will improve both predictive performance and interpretability is explored in proposed work. The system leverages Decision Trees, Naive Bayes, Random Forests and XGBoost algorithms to predict the medical condition risks of Diabetes, Anaemia, Thalassemia, Heart Disease, Thrombocytopenia within its framework. 
SHAP (SHapley Additive exPlanations) together with LIME (Local Interpretable Model-agnostic Explanations) adds functionality to the proposed system by displaying important features contributing to each prediction. The framework upholds an accuracy of 99.2% besides the ability to provide understandable explanations for interpretation of model outputs. The performance combined with interpretability from the framework enables clinical practitioners to make decisions through an understanding of AI-generated outputs thereby reducing distrust in AI-driven healthcare.</p>\",\"PeriodicalId\":11237,\"journal\":{\"name\":\"Diagnostic Pathology\",\"volume\":\"20 1\",\"pages\":\"105\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12465982/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Diagnostic Pathology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s13000-025-01686-3\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PATHOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Diagnostic Pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13000-025-01686-3","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PATHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract


Medical healthcare has advanced substantially thanks to Artificial Intelligence (AI) techniques for early disease detection and clinical decision support. However, the results of these algorithms have not been widely adopted by the public because of the black-box nature of the models. The opacity of these systems creates fundamental obstacles in medical settings that handle critical cases, because medical practitioners need to understand the reasoning behind a prediction for a particular disease. The proposed work explores a hybrid Machine Learning (ML) framework that integrates Explainable AI (XAI) strategies to improve both predictive performance and interpretability. Within this framework, the system leverages Decision Tree, Naive Bayes, Random Forest, and XGBoost algorithms to predict the risks of Diabetes, Anaemia, Thalassemia, Heart Disease, and Thrombocytopenia. SHAP (SHapley Additive exPlanations) together with LIME (Local Interpretable Model-agnostic Explanations) extends the system by displaying the important features contributing to each prediction. The framework maintains an accuracy of 99.2% while also providing understandable explanations of model outputs. This combination of performance and interpretability enables clinical practitioners to make decisions with an understanding of AI-generated outputs, thereby reducing distrust in AI-driven healthcare.
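The pipeline the abstract describes, tree-based classifiers explained post hoc with SHAP and LIME, can be illustrated with a short Python sketch. This is not the authors' code: the dataset, feature names, and hyperparameters below are synthetic stand-ins (the paper's clinical data is not reproduced here), with XGBoost standing in for one of the four model families named above.

# Minimal sketch (assumed setup, not the authors' implementation):
# train a tree ensemble, then attach SHAP and LIME explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic binary "disease risk" data standing in for clinical features.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # hypothetical names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# One of the four model families named in the abstract.
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")

# SHAP attributions (TreeExplainer is exact for tree ensembles):
# global importance as the mean absolute SHAP value per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))

# LIME: a local, model-agnostic explanation of one individual prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["low risk", "high risk"], mode="classification",
)
exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                      num_features=5)
print(exp.as_list())  # (feature condition, weight) pairs for this one case

The sketch shows the division of labor the abstract implies: SHAP gives feature attributions grounded in the tree structure, while LIME fits a local surrogate around a single patient's prediction, so a practitioner can inspect both global and per-case reasoning.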

Source journal: Diagnostic Pathology (Medicine – Pathology)
CiteScore: 4.60
Self-citation rate: 0.00%
Articles per year: 93
Review time: 1 month
Journal description: Diagnostic Pathology is an open access, peer-reviewed, online journal that considers research in surgical and clinical pathology, immunology, and biology, with a special focus on cutting-edge approaches in diagnostic pathology and tissue-based therapy. The journal covers all aspects of surgical pathology, including classic diagnostic pathology, prognosis-related diagnosis (tumor stages, prognosis markers, such as MIB-percentage, hormone receptors, etc.), and therapy-related findings. The journal also focuses on the technological aspects of pathology, including molecular biology techniques, morphometry aspects (stereology, DNA analysis, syntactic structure analysis), communication aspects (telecommunication, virtual microscopy, virtual pathology institutions, etc.), and electronic education and quality assurance (for example interactive publication, on-line references with automated updating, etc.).