Let XAI generate reliability metadata, not medical explanations

IF 4.8 | CAS Tier 2 (Medicine) | JCR Q1, COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Federico Cabitza, Enea Parimbelli
{"title":"让XAI生成可靠性元数据,而不是医学解释","authors":"Federico Cabitza ,&nbsp;Enea Parimbelli","doi":"10.1016/j.cmpb.2025.109090","DOIUrl":null,"url":null,"abstract":"<div><div>As AI becomes increasingly embedded in medical practice, the call for explainability – commonly framed as eXplainable AI (XAI) – has grown, especially under regulatory pressures. However, conventional XAI approaches misunderstand clinical decision-making by focusing on post-hoc explanations rather than actionable cues. This letter argues that to calibrate trust in AI recommendations, physicians’ primary need is not for conventional post-hoc explanations, but for “<em>reliability metadata</em>”: a set of both marginal and instance-specific indicators that facilitate the assessment of the reliability of each individual advice given. We propose shifting the focus from generating static explanations to providing actionable cues – such as calibrated confidence scores, out-of-distribution alerts, and relevant reference cases – that support adaptive reliance and mitigate automation bias. By reframing XAI as <em>eXtended and eXplorable AI</em>, we emphasize interaction, uncertainty transparency, and clinical relevance over explanations per se. This perspective encourages AI design that aligns with real-world medical cognition, promotes reflective engagement, and supports safer, more effective decision-making.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"273 ","pages":"Article 109090"},"PeriodicalIF":4.8000,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Let XAI generate reliability metadata, not medical explanations\",\"authors\":\"Federico Cabitza ,&nbsp;Enea Parimbelli\",\"doi\":\"10.1016/j.cmpb.2025.109090\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>As AI becomes increasingly embedded in medical practice, the call for explainability – commonly framed as eXplainable AI (XAI) – has grown, especially under regulatory pressures. However, conventional XAI approaches misunderstand clinical decision-making by focusing on post-hoc explanations rather than actionable cues. This letter argues that to calibrate trust in AI recommendations, physicians’ primary need is not for conventional post-hoc explanations, but for “<em>reliability metadata</em>”: a set of both marginal and instance-specific indicators that facilitate the assessment of the reliability of each individual advice given. We propose shifting the focus from generating static explanations to providing actionable cues – such as calibrated confidence scores, out-of-distribution alerts, and relevant reference cases – that support adaptive reliance and mitigate automation bias. By reframing XAI as <em>eXtended and eXplorable AI</em>, we emphasize interaction, uncertainty transparency, and clinical relevance over explanations per se. 
This perspective encourages AI design that aligns with real-world medical cognition, promotes reflective engagement, and supports safer, more effective decision-making.</div></div>\",\"PeriodicalId\":10624,\"journal\":{\"name\":\"Computer methods and programs in biomedicine\",\"volume\":\"273 \",\"pages\":\"Article 109090\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer methods and programs in biomedicine\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0169260725005073\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer methods and programs in biomedicine","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0169260725005073","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

As AI becomes increasingly embedded in medical practice, the call for explainability – commonly framed as eXplainable AI (XAI) – has grown, especially under regulatory pressures. However, conventional XAI approaches misunderstand clinical decision-making by focusing on post-hoc explanations rather than actionable cues. This letter argues that to calibrate trust in AI recommendations, physicians’ primary need is not for conventional post-hoc explanations, but for “reliability metadata”: a set of both marginal and instance-specific indicators that facilitate the assessment of the reliability of each individual piece of advice given. We propose shifting the focus from generating static explanations to providing actionable cues – such as calibrated confidence scores, out-of-distribution alerts, and relevant reference cases – that support adaptive reliance and mitigate automation bias. By reframing XAI as eXtended and eXplorable AI, we emphasize interaction, uncertainty transparency, and clinical relevance over explanations per se. This perspective encourages AI design that aligns with real-world medical cognition, promotes reflective engagement, and supports safer, more effective decision-making.
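The abstract names three concrete reliability cues: calibrated confidence scores, out-of-distribution alerts, and relevant reference cases. The following minimal Python sketch shows one hypothetical way such reliability metadata could be assembled for a single prediction; the specific choices (isotonic calibration, a k-nearest-neighbour distance heuristic for out-of-distribution alerting, training-set indices as reference cases, and the 95th-percentile threshold) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: one hypothetical way to attach "reliability
# metadata" (calibrated confidence, an out-of-distribution alert, and
# reference cases) to a single model output.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split

# Toy stand-in for a clinical dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrated confidence: wrap the classifier so its predicted probabilities
# are calibrated on held-out folds (isotonic regression is one assumed choice).
clf = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="isotonic", cv=5)
clf.fit(X_train, y_train)

# Out-of-distribution cue: mean distance to the k nearest training cases,
# flagged against the 95th percentile of in-distribution distances
# (an assumed heuristic, not a clinically validated threshold).
nn = NearestNeighbors(n_neighbors=5).fit(X_train)
train_dist, _ = nn.kneighbors(X_train)
ood_threshold = np.percentile(train_dist.mean(axis=1), 95)

def reliability_metadata(x):
    """Instance-specific reliability cues for one input case."""
    x = np.asarray(x).reshape(1, -1)
    proba = clf.predict_proba(x)[0]
    dist, idx = nn.kneighbors(x)
    return {
        "prediction": int(proba.argmax()),
        "calibrated_confidence": float(proba.max()),
        "ood_alert": bool(dist.mean() > ood_threshold),
        # Indices of the most similar training cases, which a clinician
        # could inspect as concrete reference cases.
        "reference_cases": idx[0].tolist(),
    }

print(reliability_metadata(X_test[0]))
```

In the letter's terms, the calibrated confidence and the out-of-distribution flag are instance-specific indicators that support adaptive reliance (trusting the advice more or less per case), while the reference cases invite the reflective, exploratory engagement the authors contrast with static post-hoc explanations.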
Source journal
Computer Methods and Programs in Biomedicine (Engineering & Technology; Engineering, Biomedical)
CiteScore: 12.30
Self-citation rate: 6.60%
Articles published per year: 601
Review time: 135 days
Aims and scope: To encourage the development of formal computing methods, and their application in biomedical research and medical practice, by illustration of fundamental principles in biomedical informatics research; to stimulate basic research into application software design; to report the state of research of biomedical information processing projects; to report new computer methodologies applied in biomedical areas; the eventual distribution of demonstrable software to avoid duplication of effort; to provide a forum for discussion and improvement of existing software; to optimize contact between national organizations and regional user groups by promoting an international exchange of information on formal methods, standards and software in biomedicine. Computer Methods and Programs in Biomedicine covers computing methodology and software systems derived from computing science for implementation in all aspects of biomedical research and medical practice. It is designed to serve: biochemists; biologists; geneticists; immunologists; neuroscientists; pharmacologists; toxicologists; clinicians; epidemiologists; psychiatrists; psychologists; cardiologists; chemists; (radio)physicists; computer scientists; programmers and systems analysts; biomedical, clinical, electrical and other engineers; teachers of medical informatics and users of educational software.