Explainable AI and stakes in medicine: A user study

IF 5.1 · Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Sam Baron, Andrew J. Latham, Somogy Varga
{"title":"可解释的人工智能和医学中的利害关系:一项用户研究","authors":"Sam Baron ,&nbsp;Andrew J. Latham ,&nbsp;Somogy Varga","doi":"10.1016/j.artint.2025.104282","DOIUrl":null,"url":null,"abstract":"<div><div>The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems in medical contexts. The factors considered were the <em>stakes</em> of the decision—high versus low—and the decision source—human versus AI. 484 participants were presented with vignettes in which medical diagnosis and treatment plan recommendations were made by humans or by AI. Separate vignettes were used for <em>high stakes</em> scenarios involving life-threatening diseases, and <em>low stakes</em> scenarios involving mild diseases. In each vignette, an explanation for the decision was given. Four explanation types were tested across separate vignettes: no explanation, counterfactual, causal and a novel ‘narrative-based’ explanation, not previously considered. This yielded a total of 16 conditions, of which each participant saw only one. Individuals were asked to evaluate the explanations they received based on helpfulness, understanding, consent, reliability, trust, interests and likelihood of undergoing treatment. We observed a main effect for stakes on all factors and a main effect for decision source on all factors except for helpfulness and likelihood to undergo treatment. While we observed effects for explanation on helpfulness, understanding, consent, reliability, trust and interests, we by and large did not see any differences between the effects of explanation types. This suggests that the effectiveness of explanations may not depend on type of explanation but instead, on the stakes and decision source.</div></div>","PeriodicalId":8434,"journal":{"name":"Artificial Intelligence","volume":"340 ","pages":"Article 104282"},"PeriodicalIF":5.1000,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Explainable AI and stakes in medicine: A user study\",\"authors\":\"Sam Baron ,&nbsp;Andrew J. Latham ,&nbsp;Somogy Varga\",\"doi\":\"10.1016/j.artint.2025.104282\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems in medical contexts. The factors considered were the <em>stakes</em> of the decision—high versus low—and the decision source—human versus AI. 484 participants were presented with vignettes in which medical diagnosis and treatment plan recommendations were made by humans or by AI. 
Separate vignettes were used for <em>high stakes</em> scenarios involving life-threatening diseases, and <em>low stakes</em> scenarios involving mild diseases. In each vignette, an explanation for the decision was given. Four explanation types were tested across separate vignettes: no explanation, counterfactual, causal and a novel ‘narrative-based’ explanation, not previously considered. This yielded a total of 16 conditions, of which each participant saw only one. Individuals were asked to evaluate the explanations they received based on helpfulness, understanding, consent, reliability, trust, interests and likelihood of undergoing treatment. We observed a main effect for stakes on all factors and a main effect for decision source on all factors except for helpfulness and likelihood to undergo treatment. While we observed effects for explanation on helpfulness, understanding, consent, reliability, trust and interests, we by and large did not see any differences between the effects of explanation types. This suggests that the effectiveness of explanations may not depend on type of explanation but instead, on the stakes and decision source.</div></div>\",\"PeriodicalId\":8434,\"journal\":{\"name\":\"Artificial Intelligence\",\"volume\":\"340 \",\"pages\":\"Article 104282\"},\"PeriodicalIF\":5.1000,\"publicationDate\":\"2025-01-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0004370225000013\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0004370225000013","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems in medical contexts. The factors considered were the stakes of the decision—high versus low—and the decision source—human versus AI. 484 participants were presented with vignettes in which medical diagnosis and treatment plan recommendations were made by humans or by AI. Separate vignettes were used for high stakes scenarios involving life-threatening diseases, and low stakes scenarios involving mild diseases. In each vignette, an explanation for the decision was given. Four explanation types were tested across separate vignettes: no explanation, counterfactual, causal and a novel ‘narrative-based’ explanation, not previously considered. This yielded a total of 16 conditions, of which each participant saw only one. Individuals were asked to evaluate the explanations they received based on helpfulness, understanding, consent, reliability, trust, interests and likelihood of undergoing treatment. We observed a main effect for stakes on all factors and a main effect for decision source on all factors except for helpfulness and likelihood to undergo treatment. While we observed effects for explanation on helpfulness, understanding, consent, reliability, trust and interests, we by and large did not see any differences between the effects of explanation types. This suggests that the effectiveness of explanations may not depend on type of explanation but instead, on the stakes and decision source.
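As a quick illustration of the study design described above (a sketch based only on the factor counts reported in the abstract; the variable names and labels are illustrative, not the authors'), the 16 between-subjects conditions arise from crossing two stakes levels, two decision sources, and four explanation types:

```python
from itertools import product

# Sketch: enumerate the 2 (stakes) x 2 (decision source) x 4 (explanation type)
# design reported in the abstract. Labels are illustrative, not the authors' own.
stakes = ["high", "low"]                  # life-threatening vs. mild disease
sources = ["human", "AI"]                 # who makes the recommendation
explanations = ["none", "counterfactual", "causal", "narrative-based"]

conditions = list(product(stakes, sources, explanations))
assert len(conditions) == 16              # each participant saw exactly one condition

for i, (stake, source, explanation) in enumerate(conditions, start=1):
    print(f"Condition {i:2d}: stakes={stake}, source={source}, explanation={explanation}")
```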
Source journal: Artificial Intelligence (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.20
Self-citation rate: 1.40%
Annual articles: 118
Review time: 8 months
Journal description: The Journal of Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. Additionally, the journal accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ also accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.