Title: Let XAI generate reliability metadata, not medical explanations
Authors: Federico Cabitza, Enea Parimbelli
Journal: Computer Methods and Programs in Biomedicine, vol. 273, Article 109090 (JCR Q1, Computer Science, Interdisciplinary Applications; Impact Factor 4.8)
Publication date: 2025-10-08 (Journal Article)
DOI: 10.1016/j.cmpb.2025.109090
URL: https://www.sciencedirect.com/science/article/pii/S0169260725005073
Abstract
As AI becomes increasingly embedded in medical practice, the call for explainability – commonly framed as eXplainable AI (XAI) – has grown, especially under regulatory pressures. However, conventional XAI approaches misunderstand clinical decision-making by focusing on post-hoc explanations rather than actionable cues. This letter argues that to calibrate trust in AI recommendations, physicians' primary need is not for conventional post-hoc explanations, but for "reliability metadata": a set of both marginal and instance-specific indicators that facilitate the assessment of the reliability of each individual piece of advice given. We propose shifting the focus from generating static explanations to providing actionable cues – such as calibrated confidence scores, out-of-distribution alerts, and relevant reference cases – that support adaptive reliance and mitigate automation bias. By reframing XAI as eXtended and eXplorable AI, we emphasize interaction, uncertainty transparency, and clinical relevance over explanations per se. This perspective encourages AI design that aligns with real-world medical cognition, promotes reflective engagement, and supports safer, more effective decision-making.
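The cues the abstract proposes (calibrated confidence scores, out-of-distribution alerts, relevant reference cases) can be sketched as a small data structure attached to each prediction. The Python below is a hypothetical illustration only, not the authors' method: the field names, the toy one-dimensional feature, temperature scaling as the calibration step, and the z-score OOD heuristic are all assumptions made for clarity.

```python
# Hypothetical sketch of per-prediction "reliability metadata".
# All names and thresholds are illustrative assumptions, not from the letter.
from dataclasses import dataclass
import math


@dataclass
class ReliabilityMetadata:
    calibrated_confidence: float   # probability after calibration
    out_of_distribution: bool      # OOD alert for this input
    reference_cases: list          # indices of nearest training cases


def temperature_scale(logit: float, temperature: float = 2.0) -> float:
    """Map a raw logit to a calibrated probability via temperature scaling."""
    return 1.0 / (1.0 + math.exp(-logit / temperature))


def build_metadata(x, logit, train_xs, ood_z_threshold=3.0, k=3):
    """Attach reliability cues to one prediction on a toy 1-D feature x."""
    mean = sum(train_xs) / len(train_xs)
    std = math.sqrt(sum((v - mean) ** 2 for v in train_xs) / len(train_xs))
    z = abs(x - mean) / std  # crude distance-based OOD score
    # k nearest training cases, as candidate "relevant reference cases"
    neighbors = sorted(range(len(train_xs)),
                       key=lambda i: abs(train_xs[i] - x))[:k]
    return ReliabilityMetadata(
        calibrated_confidence=temperature_scale(logit),
        out_of_distribution=z > ood_z_threshold,
        reference_cases=neighbors,
    )
```

In use, a clinician-facing interface would surface these fields alongside the recommendation itself, so reliance can adapt per case rather than rest on a static explanation; a real system would replace the 1-D z-score with a proper OOD detector and fit the calibration temperature on held-out data.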
Journal overview:
The aims of the journal are: to encourage the development of formal computing methods, and their application in biomedical research and medical practice, by illustration of fundamental principles in biomedical informatics research; to stimulate basic research into application software design; to report the state of research of biomedical information processing projects; to report new computer methodologies applied in biomedical areas; to support the eventual distribution of demonstrable software to avoid duplication of effort; to provide a forum for discussion and improvement of existing software; and to optimize contact between national organizations and regional user groups by promoting an international exchange of information on formal methods, standards and software in biomedicine.
Computer Methods and Programs in Biomedicine covers computing methodology and software systems derived from computing science for implementation in all aspects of biomedical research and medical practice. It is designed to serve: biochemists; biologists; geneticists; immunologists; neuroscientists; pharmacologists; toxicologists; clinicians; epidemiologists; psychiatrists; psychologists; cardiologists; chemists; (radio)physicists; computer scientists; programmers and systems analysts; biomedical, clinical, electrical and other engineers; teachers of medical informatics and users of educational software.