Explainable AI: definition and attributes of a good explanation for health AI

Evangelia Kyrimi, Scott McLachlan, Jared M. Wohlgemut, Zane B. Perkins, David A. Lagnado, William Marsh, the ExAIDSS Expert Group
{"title":"Explainable AI: definition and attributes of a good explanation for health AI","authors":"Evangelia Kyrimi,&nbsp;Scott McLachlan,&nbsp;Jared M. Wohlgemut,&nbsp;Zane B. Perkins,&nbsp;David A. Lagnado,&nbsp;William Marsh,&nbsp;the ExAIDSS Expert Group","doi":"10.1007/s43681-025-00668-x","DOIUrl":null,"url":null,"abstract":"<div><p>Proposals of artificial intelligence (AI) solutions based on more complex and accurate predictive models are becoming ubiquitous across many disciplines. As the complexity of these models increases, there is a tendency for transparency and users’ understanding to decrease. This means accurate prediction alone is insufficient to make an AI-based solution truly useful. For the development of healthcare systems, this raises new issues for accountability and safety. How and why an AI system made a recommendation may necessitate complex explanations of the inner workings and reasoning processes. While research on explainable AI (XAI) has grown significantly in recent years, and the demand for XAI in medicine is high, determining what constitutes a good explanation is ad hoc and providing adequate explanations remains a challenge. To realise the potential of AI, it is critical to shed light on two fundamental questions of explanation for safety–critical AI such as health-AI that remain unanswered: (1) What is an explanation in health-AI? And (2) What are the attributes of a good explanation in health-AI? In this study and possibly for the first time we studied published literature, and expert opinions from a diverse group of professionals reported from a two-round Delphi study. The research outputs include (1) a proposed definition of explanation in health-AI, and (2) a comprehensive set of attributes that characterize a good explanation in health-AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3883 - 3896"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00668-x.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00668-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Proposals for artificial intelligence (AI) solutions based on increasingly complex and accurate predictive models are becoming ubiquitous across many disciplines. As the complexity of these models increases, transparency and users' understanding tend to decrease. Accurate prediction alone is therefore insufficient to make an AI-based solution truly useful. For the development of healthcare systems, this raises new issues of accountability and safety. Explaining how and why an AI system made a recommendation may require complex accounts of its inner workings and reasoning processes. While research on explainable AI (XAI) has grown significantly in recent years, and the demand for XAI in medicine is high, determining what constitutes a good explanation remains ad hoc, and providing adequate explanations remains a challenge. To realise the potential of AI, it is critical to answer two fundamental questions about explanation for safety-critical AI, such as health AI, that remain open: (1) What is an explanation in health AI? And (2) What are the attributes of a good explanation in health AI? In this study, possibly for the first time, we combined an analysis of the published literature with the expert opinions of a diverse group of professionals, gathered through a two-round Delphi study. The research outputs include (1) a proposed definition of explanation in health AI, and (2) a comprehensive set of attributes that characterise a good explanation in health AI.
