Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists

Impact Factor: 3.0 · CAS Tier 1 (Philosophy) · JCR Q1 (Ethics)
Lasse Benzinger, Jelena Epping, Frank Ursin, Sabine Salloch
{"title":"人工智能为无行为能力患者的伦理决策提供支持:一项针对德国麻醉师和内科医生的调查。","authors":"Lasse Benzinger, Jelena Epping, Frank Ursin, Sabine Salloch","doi":"10.1186/s12910-024-01079-z","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients.</p><p><strong>Methods: </strong>A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analysis was performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach.</p><p><strong>Results: </strong>Participants were predominantly male (69.3%), with ages ranging from 27 to 77. Most worked in nonacademic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, lack of explicability in AI results, and doubts about AI's ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, de-humanisation, and lack of explicability persist.</p><p><strong>Conclusions: </strong>German physicians frequently encountering incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold a higher esteem for CESS. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.</p>","PeriodicalId":55348,"journal":{"name":"BMC Medical Ethics","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11256615/pdf/","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists.\",\"authors\":\"Lasse Benzinger, Jelena Epping, Frank Ursin, Sabine Salloch\",\"doi\":\"10.1186/s12910-024-01079-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. 
However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients.</p><p><strong>Methods: </strong>A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analysis was performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach.</p><p><strong>Results: </strong>Participants were predominantly male (69.3%), with ages ranging from 27 to 77. Most worked in nonacademic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, lack of explicability in AI results, and doubts about AI's ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, de-humanisation, and lack of explicability persist.</p><p><strong>Conclusions: </strong>German physicians frequently encountering incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold a higher esteem for CESS. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.</p>\",\"PeriodicalId\":55348,\"journal\":{\"name\":\"BMC Medical Ethics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-07-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11256615/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMC Medical Ethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1186/s12910-024-01079-z\",\"RegionNum\":1,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMC Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1186/s12910-024-01079-z","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract


Background: Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients.

Methods: A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, focusing on physicians who often encounter patients lacking decision-making capacity. The questionnaire covered attitudes toward AI-driven preference prediction, availability and utilization of Clinical Ethics Support Services (CESS), and experiences with ethically challenging situations. Descriptive statistics and bivariate analyses were performed. Qualitative responses were analyzed using content analysis in a mixed inductive-deductive approach.
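The abstract does not state which statistical tests or software were used. As a minimal, purely illustrative sketch of the kind of bivariate analysis described (relating a categorical attitude item to physician specialty), a cross-tabulation with a chi-square test of independence could look as follows; all variable names and data are hypothetical and do not reproduce the study's actual dataset or analysis code.

```python
# Illustrative sketch only: hypothetical data, not the study's dataset.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey responses: physician specialty and a dichotomized
# attitude item on AI-driven preference prediction.
df = pd.DataFrame({
    "specialty": ["anesthesiology"] * 6 + ["internal_medicine"] * 6,
    "supports_ai_prediction": ["yes", "no", "no", "no", "yes", "no",
                               "no", "no", "yes", "no", "no", "no"],
})

# Descriptive statistics: cross-tabulation of the two categorical variables.
table = pd.crosstab(df["specialty"], df["supports_ai_prediction"])
print(table)

# Bivariate analysis: chi-square test of independence between
# specialty and attitude.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```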

Results: Participants were predominantly male (69.3%), with ages ranging from 27 to 77. Most worked in nonacademic hospitals (82%). Physicians generally showed hesitance toward AI-driven preference prediction, citing concerns about the loss of individuality and humanity, lack of explicability in AI results, and doubts about AI's ability to encompass the ethical deliberation process. In contrast, physicians had a more positive opinion of CESS. Availability of CESS varied, with 81.8% of participants reporting access. Among those without access, 91.8% expressed a desire for CESS. Physicians' reluctance toward AI-driven preference prediction aligns with concerns about transparency, individuality, and human-machine interaction. While AI could enhance the accuracy of predictions and reduce surrogate burden, concerns about potential biases, dehumanization, and lack of explicability persist.

Conclusions: German physicians who frequently encounter incapacitated patients exhibit hesitance toward AI-driven preference prediction but hold CESS in higher esteem. Addressing concerns about individuality, explicability, and human-machine roles may facilitate the acceptance of AI in clinical ethics. Further research into patient and surrogate perspectives is needed to ensure AI aligns with patient preferences and values in complex medical decisions.

Source journal: BMC Medical Ethics
CiteScore: 5.20
Self-citation rate: 7.40%
Annual articles: 108
Review time: >12 weeks
Journal description: BMC Medical Ethics is an open access journal publishing original peer-reviewed research articles in relation to the ethical aspects of biomedical research and clinical practice, including professional choices and conduct, medical technologies, healthcare systems and health policies.