{"title":"Reframing 'dehumanisation': AI and the reality of clinical communication.","authors":"Hazem Zohny","doi":"10.1136/jme-2025-111307","DOIUrl":null,"url":null,"abstract":"<p><p>Warnings that large language models (LLMs) could 'dehumanise' medical decision-making often rest on an asymmetrical comparison: the idealised, attentive healthcare provider versus a clumsy, early-stage artificial intelligence (AI). This framing ignores a more urgent reality: many patients face rushed, jargon-heavy, inconsistent communication, even from skilled professionals. This response to Hildebrand's critique argues that: (1) while he worries patients lose a safeguard against family pressure, in practice, time pressure, uncertainty and fragile dynamics often prevent clinician intervention. Because LLMs are continuously available to translate jargon and provide plain-language explanations in a patient's preferred language, they can reduce reliance on companions from the outset and be designed to flag coercive cues and invite confidential 'pause or reset' moments over time. (2) Appeals to implicit non-verbal cues as safeguards against paternalism misstate their value: when such cues contradict speech they commonly generate confusion and mistrust. By contrast, LLM communication is configurable; patients can make the level of guidance an explicit, revisable choice, enhancing autonomy. (3) Evidence that LLM responses are often rated more empathetic than clinicians disrupts the 'technical AI/empathic human' dichotomy. Moreover, clinical trust is multifaceted, frequently grounded in perceived competence and clarity, not (contra Hildebrand) shared vulnerability. Finally, because consent details are routinely forgotten, an on-demand explainer can improve comprehension and disclosure. While reliability, accountability and privacy remain decisive constraints, measured against real-world practice, careful LLM integration promises more equitable, patient-centred communication-not its erosion.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1136/jme-2025-111307","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Abstract
Warnings that large language models (LLMs) could 'dehumanise' medical decision-making often rest on an asymmetrical comparison: the idealised, attentive healthcare provider versus a clumsy, early-stage artificial intelligence (AI). This framing ignores a more urgent reality: many patients face rushed, jargon-heavy, inconsistent communication, even from skilled professionals. This response to Hildebrand's critique argues that: (1) while he worries that patients lose a safeguard against family pressure, in practice, time pressure, uncertainty and fragile dynamics often prevent clinician intervention. Because LLMs are continuously available to translate jargon and provide plain-language explanations in a patient's preferred language, they can reduce reliance on companions from the outset and be designed to flag coercive cues and invite confidential 'pause or reset' moments over time. (2) Appeals to implicit non-verbal cues as safeguards against paternalism misstate their value: when such cues contradict speech, they commonly generate confusion and mistrust. By contrast, LLM communication is configurable; patients can make the level of guidance an explicit, revisable choice, enhancing autonomy. (3) Evidence that LLM responses are often rated as more empathetic than those of clinicians disrupts the 'technical AI/empathic human' dichotomy. Moreover, clinical trust is multifaceted, frequently grounded in perceived competence and clarity, not (contra Hildebrand) shared vulnerability. Finally, because consent details are routinely forgotten, an on-demand explainer can improve comprehension and disclosure. While reliability, accountability and privacy remain decisive constraints, measured against real-world practice, careful LLM integration promises more equitable, patient-centred communication, not its erosion.
About the journal
Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients.
Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost.
JME is the official journal of the Institute of Medical Ethics.