Explainable AI in medicine: challenges of integrating XAI into the future clinical routine.

Frontiers in Radiology (IF 2.3) · Pub Date: 2025-08-05 · eCollection Date: 2025-01-01 · DOI: 10.3389/fradi.2025.1627169 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12391920/pdf/
Tim Räz, Aurélie Pahud De Mortanges, Mauricio Reyes
{"title":"医学中可解释的人工智能:将人工智能融入未来临床常规的挑战。","authors":"Tim Räz, Aurélie Pahud De Mortanges, Mauricio Reyes","doi":"10.3389/fradi.2025.1627169","DOIUrl":null,"url":null,"abstract":"<p><p>Future AI systems may need to provide medical professionals with explanations of AI predictions and decisions. While current XAI methods match these requirements in principle, they are too inflexible and not sufficiently geared toward clinicians' needs to fulfill this role. This paper offers a conceptual roadmap for how XAI may be integrated into future medical practice. We identify three desiderata of increasing difficulty: First, explanations need to be provided in a context- and user-dependent manner. Second, explanations need to be created through a genuine dialogue between AI and human users. Third, AI systems need genuine social capabilities. We use an imaginary stroke treatment scenario as a foundation for our roadmap to explore how the three challenges emerge at different stages of clinical practice. We provide definitions of key concepts such as genuine dialogue and social capability, we discuss why these capabilities are desirable, and we identify major roadblocks. Our goal is to help practitioners and researchers in developing future XAI that is capable of operating as a participant in complex medical environments. We employ an interdisciplinary methodology that integrates medical XAI, medical practice, and philosophy.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"5 ","pages":"1627169"},"PeriodicalIF":2.3000,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12391920/pdf/","citationCount":"0","resultStr":"{\"title\":\"Explainable AI in medicine: challenges of integrating XAI into the future clinical routine.\",\"authors\":\"Tim Räz, Aurélie Pahud De Mortanges, Mauricio Reyes\",\"doi\":\"10.3389/fradi.2025.1627169\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Future AI systems may need to provide medical professionals with explanations of AI predictions and decisions. While current XAI methods match these requirements in principle, they are too inflexible and not sufficiently geared toward clinicians' needs to fulfill this role. This paper offers a conceptual roadmap for how XAI may be integrated into future medical practice. We identify three desiderata of increasing difficulty: First, explanations need to be provided in a context- and user-dependent manner. Second, explanations need to be created through a genuine dialogue between AI and human users. Third, AI systems need genuine social capabilities. We use an imaginary stroke treatment scenario as a foundation for our roadmap to explore how the three challenges emerge at different stages of clinical practice. We provide definitions of key concepts such as genuine dialogue and social capability, we discuss why these capabilities are desirable, and we identify major roadblocks. Our goal is to help practitioners and researchers in developing future XAI that is capable of operating as a participant in complex medical environments. 
We employ an interdisciplinary methodology that integrates medical XAI, medical practice, and philosophy.</p>\",\"PeriodicalId\":73101,\"journal\":{\"name\":\"Frontiers in radiology\",\"volume\":\"5 \",\"pages\":\"1627169\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12391920/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in radiology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/fradi.2025.1627169\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in radiology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fradi.2025.1627169","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Future AI systems may need to provide medical professionals with explanations of AI predictions and decisions. While current XAI methods match these requirements in principle, they are too inflexible and not sufficiently geared toward clinicians' needs to fulfill this role. This paper offers a conceptual roadmap for how XAI may be integrated into future medical practice. We identify three desiderata of increasing difficulty: First, explanations need to be provided in a context- and user-dependent manner. Second, explanations need to be created through a genuine dialogue between AI and human users. Third, AI systems need genuine social capabilities. We use an imaginary stroke treatment scenario as a foundation for our roadmap to explore how the three challenges emerge at different stages of clinical practice. We provide definitions of key concepts such as genuine dialogue and social capability, we discuss why these capabilities are desirable, and we identify major roadblocks. Our goal is to help practitioners and researchers in developing future XAI that is capable of operating as a participant in complex medical environments. We employ an interdisciplinary methodology that integrates medical XAI, medical practice, and philosophy.
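The first desideratum above, context- and user-dependent explanations, can be made concrete with a small sketch. The Python snippet below is not from the paper; it is a hypothetical illustration in which an assumed user role, time pressure, and model confidence select an explanation format, loosely in the spirit of the stroke-treatment scenario. All names (`UserRole`, `ClinicalContext`, `select_explanation`) and the dispatch rules are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto


class UserRole(Enum):
    """Clinical roles that may call for different explanation styles (hypothetical taxonomy)."""
    RADIOLOGIST = auto()
    STROKE_NEUROLOGIST = auto()
    NURSE = auto()
    PATIENT = auto()


@dataclass
class ClinicalContext:
    """Minimal stand-in for the situational factors an explanation should adapt to."""
    time_critical: bool           # e.g., acute stroke triage vs. follow-up review
    prediction_confidence: float  # model confidence for the current case, in [0, 1]


def select_explanation(role: UserRole, ctx: ClinicalContext) -> str:
    """Pick an explanation format based on user and context.

    This hard-coded dispatch is purely illustrative; the paper argues the choice
    should ultimately be negotiated in a genuine dialogue with the user, not fixed in advance.
    """
    if ctx.time_critical:
        # Under time pressure, keep explanations terse and visual.
        return "saliency_overlay_with_one_line_summary"
    if role is UserRole.PATIENT:
        # Lay users: plain-language, counterfactual-style explanations.
        return "plain_language_counterfactual"
    if ctx.prediction_confidence < 0.6:
        # Low confidence: surface uncertainty and similar prior cases for expert review.
        return "uncertainty_report_with_case_based_examples"
    return "feature_attribution_report"


if __name__ == "__main__":
    ctx = ClinicalContext(time_critical=True, prediction_confidence=0.9)
    print(select_explanation(UserRole.STROKE_NEUROLOGIST, ctx))
```

Such a static mapping only addresses the first desideratum; the second and third (dialogue and social capability) would require the system to revise its explanation strategy interactively rather than follow fixed rules.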
