Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review

Cancer Innovation · Pub Date: 2024-07-03 · DOI: 10.1002/cai2.136
Amirehsan Ghasemi, Soheil Hashtarkhani, David L. Schwartz, Arash Shaban-Nejad
{"title":"Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review","authors":"Amirehsan Ghasemi,&nbsp;Soheil Hashtarkhani,&nbsp;David L. Schwartz,&nbsp;Arash Shaban-Nejad","doi":"10.1002/cai2.136","DOIUrl":null,"url":null,"abstract":"<p>With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Explore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the top model-agnostic XAI technique in breast cancer research in terms of usage, explaining the model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model prediction. Additionally, it is relatively easy to implement effectively and completely suits performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.</p>","PeriodicalId":100212,"journal":{"name":"Cancer Innovation","volume":"3 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/cai2.136","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cancer Innovation","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cai2.136","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, because many of these algorithms behave in nonlinear and complex ways, clinicians cannot readily trust their decisions, which are often regarded as a black-box process. The scientific community has therefore introduced explainable artificial intelligence (XAI) to address this problem. This systematic scoping review investigates the application of XAI to breast cancer detection and risk prediction. We conducted a comprehensive search of Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned January 2017 to July 2023 and focused on peer-reviewed studies applying XAI methods to breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI technique in breast cancer research, applied to explaining model predictions, diagnosis and classification of biomarkers, and prognosis and survival analysis. SHAP was primarily used to explain tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and useful for explaining any model's predictions; it is also relatively easy to implement effectively and is well suited to high-performing models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
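To make the review's central finding concrete, the sketch below pairs SHAP with a tree-based ensemble. It is a minimal illustration assuming the open-source shap and scikit-learn packages and scikit-learn's bundled Wisconsin breast cancer dataset; none of these specific tools or data are mandated by the review.

```python
# A minimal sketch of the workflow the review describes: explaining a
# tree-based ensemble with SHAP. The shap package, scikit-learn, and the
# bundled Wisconsin breast cancer dataset are illustrative choices, not
# tools prescribed by the review.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Load tabular features (tumor radius, texture, ...) and benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a tree-based ensemble, the model family SHAP most often explained
# in the studies surveyed.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles;
# for a binary gradient-boosted model the values are in log-odds units.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global summary: which features push predictions toward malignancy overall.
shap.summary_plot(shap_values, X_test)
```

For models that are not tree-based, shap.KernelExplainer offers a slower but fully model-agnostic alternative; that model-agnostic property is what the review identifies as the main driver of SHAP's popularity.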
