Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue.

Impact Factor 2.3 · CAS Zone 2 (Philosophy) · JCR Q1 (Ethics)
Mahmut Alpertunga Kara
Journal: Medicine, Health Care and Philosophy
DOI: 10.1007/s11019-024-10241-8
Publication date: 2024-12-07
Publication type: Journal Article
Citations: 0

Abstract

Artificial intelligence-based clinical decision support systems have the potential to improve clinical practice, but they may harm the physician-patient dialogue because of the control problem. Physician-patient dialogue depends on human qualities such as compassion, trust, and empathy, which are shared by both parties. These qualities are necessary for the parties to reach a shared understanding about clinical decisions, a merging of horizons. The patient attends the clinical encounter not only with a malfunctioning body, but also with an 'unhomelike' experience of illness that is bound up with a world of values and meanings, a life-world. Making wise individual decisions in accordance with the patient's life-world requires not only scientific analysis of causal relationships but also empathetic listening to the patient's concerns. For a decision to be made, clinical information must be interpreted in light of the patient's life-world. This side of clinical practice is not a job for computers, and they cannot be final decision-makers. In the control problem, by contrast, users blindly accept system output out of over-reliance rather than evaluating it with their own judgement; over-reliant parties thus cede their place in the dialogue to the system. The dialogue may then be disrupted and mutual trust lost. Therefore, to protect the physician-patient dialogue, decision support systems should be designed to avoid the control problem, and their use should be limited when this is not possible.

Source journal metrics: CiteScore 4.30 · Self-citation rate 4.80% · Articles per year: 64
About the journal: Medicine, Health Care and Philosophy: A European Journal is the official journal of the European Society for Philosophy of Medicine and Health Care. It provides a forum for international exchange of research data, theories, reports and opinions in bioethics and philosophy of medicine. The journal promotes interdisciplinary studies, and stimulates philosophical analysis centered on a common object of reflection: health care, the human effort to deal with disease, illness, death as well as health, well-being and life. Particular attention is paid to developing contributions from all European countries, and to making accessible scientific work and reports on the practice of health care ethics, from all nations, cultures and language areas in Europe.