AI and XAI second opinion: the danger of false confirmation in human-AI collaboration.

Impact Factor 3.3 · CAS Tier 2 (Philosophy) · JCR Q1 (Ethics)
Rikard Rosenbacke, Åsa Melhus, Martin McKee, David Stuckler
Journal of Medical Ethics · DOI: 10.1136/jme-2024-110074 · Published 2024-07-29 (Journal Article)
Citations: 0

Abstract

Can AI substitute for a human physician's second opinion? Recently, the Journal of Medical Ethics published two contrasting views: Kempt and Nagel advocate using artificial intelligence (AI) for a second opinion except when its conclusions diverge significantly from the initial physician's, while Jongsma and Sand argue for a second human opinion irrespective of AI's concurrence or dissent. The crux of this debate hinges on the prevalence and impact of 'false confirmation': a scenario in which AI erroneously validates an incorrect human decision. Such errors seem exceedingly difficult to detect, reminiscent of heuristics akin to confirmation bias. However, this debate has yet to engage with the emergence of explainable AI (XAI), which elaborates on why the AI tool reaches its diagnosis. To progress this debate, we outline a framework for conceptualising decision-making errors in physician-AI collaborations. We then review emerging evidence on the magnitude of false confirmation errors. Our simulations show that they are likely to be pervasive in clinical practice, decreasing diagnostic accuracy to between 5% and 30%. We conclude with a pragmatic approach to employing AI as a second opinion, emphasising the need for physicians to make clinical decisions before consulting AI; employing nudges to increase awareness of false confirmations; and critically engaging with XAI explanations. This approach underscores the necessity for a cautious, evidence-based methodology when integrating AI into clinical decision-making.
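The mechanism behind false confirmation can be illustrated with a toy Monte Carlo simulation. This is not the authors' model: the accuracy values, the binary-diagnosis setup, and the revision rule (the physician adopts the AI's answer on dissent with probability `p_revise`) are all illustrative assumptions.

```python
import random

def simulate(n=100_000, p_physician=0.85, p_ai=0.85, p_revise=0.5, seed=0):
    """Toy sketch of physician-AI 'second opinion' collaboration.

    Assumptions (illustrative): each case has a binary ground truth;
    physician and AI are independently correct with the given
    probabilities; on AI concurrence the decision stands; on dissent
    the physician adopts the AI's answer with probability p_revise.
    Returns (false-confirmation rate, final diagnostic accuracy).
    """
    rng = random.Random(seed)
    false_confirmations = 0
    final_correct = 0
    for _ in range(n):
        doc_correct = rng.random() < p_physician
        ai_correct = rng.random() < p_ai
        # In a binary task the AI agrees with the physician iff both
        # are correct or both are wrong.
        agree = doc_correct == ai_correct
        if agree:
            if not doc_correct:
                false_confirmations += 1  # AI validates a wrong decision
            final = doc_correct
        else:
            # Dissent: the physician revises (adopts the AI answer) or not.
            final = ai_correct if rng.random() < p_revise else doc_correct
        final_correct += final
    return false_confirmations / n, final_correct / n
```

With these illustrative defaults, roughly (1 − 0.85)² ≈ 2.25% of all cases are false confirmations, and by construction a dissent-triggered review can never catch them, since the AI concurs. This is the structural point of the debate, not an estimate of real clinical error rates.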

Source journal
Journal of Medical Ethics (Medicine: Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Articles per year: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.