Reframing 'dehumanisation': AI and the reality of clinical communication.

Impact Factor: 3.4 · JCR Q1 (Ethics) · JCR Region 2 (Philosophy)
Hazem Zohny
Journal of Medical Ethics · DOI: 10.1136/jme-2025-111307 · Published: 2025-09-02 · Journal Article
Citations: 0

Abstract

Warnings that large language models (LLMs) could 'dehumanise' medical decision-making often rest on an asymmetrical comparison: the idealised, attentive healthcare provider versus a clumsy, early-stage artificial intelligence (AI). This framing ignores a more urgent reality: many patients face rushed, jargon-heavy, inconsistent communication, even from skilled professionals. This response to Hildebrand's critique argues that: (1) while he worries patients lose a safeguard against family pressure, in practice, time pressure, uncertainty and fragile dynamics often prevent clinician intervention. Because LLMs are continuously available to translate jargon and provide plain-language explanations in a patient's preferred language, they can reduce reliance on companions from the outset and be designed to flag coercive cues and invite confidential 'pause or reset' moments over time. (2) Appeals to implicit non-verbal cues as safeguards against paternalism misstate their value: when such cues contradict speech they commonly generate confusion and mistrust. By contrast, LLM communication is configurable; patients can make the level of guidance an explicit, revisable choice, enhancing autonomy. (3) Evidence that LLM responses are often rated more empathetic than clinicians' disrupts the 'technical AI/empathic human' dichotomy. Moreover, clinical trust is multifaceted, frequently grounded in perceived competence and clarity, not (contra Hildebrand) shared vulnerability. Finally, because consent details are routinely forgotten, an on-demand explainer can improve comprehension and disclosure. While reliability, accountability and privacy remain decisive constraints, measured against real-world practice, careful LLM integration promises more equitable, patient-centred communication, not its erosion.

Source journal: Journal of Medical Ethics (Medicine – Medical Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Articles per year: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.