Charting the ethical landscape of generative AI-augmented clinical documentation.

Impact factor 3.3 · CAS Zone 2 (Philosophy) · JCR Q1 (Ethics)
Qiwei Wilton Sun, Jennifer Miller, Sarah C Hull
{"title":"Charting the ethical landscape of generative AI-augmented clinical documentation.","authors":"Qiwei Wilton Sun, Jennifer Miller, Sarah C Hull","doi":"10.1136/jme-2024-110656","DOIUrl":null,"url":null,"abstract":"<p><p>Generative artificial intelligence (AI) chatbots such as ChatGPT have several potential clinical applications, but their use for clinical documentation remains underexplored. AI-generated clinical documentation presents an appealing solution to administrative burden but raises new and old ethical concerns that may be overlooked. This article reviews the potential use of generative AI chatbots for purposes such as note-writing, handoffs, and prior authorisation letters, and the ethical considerations arising from their use in this context. AI-generated documentation may offer standardised and consistent documentation across encounters but may also embed biases that can spread across clinical teams relying on previous notes or handoffs, compromising clinical judgement, especially for vulnerable populations such as cognitively impaired or non-English-speaking patients. These tools may transform clinician-patient relationships by reducing administrative work and enhancing shared decision-making but may also compromise the emotional and moral elements of patient care. Moreover, the lack of algorithmic transparency raises concerns that may complicate the determination of responsibility when errors occur. To address these considerations, we propose notifying patients when the use of AI-generated clinical documentation meaningfully impacts their understanding of care, requiring clinician review of drafts, and clarifying areas of ambiguity to protect patient autonomy. Generative AI-specific legislation, error reporting databases and accountable measures for clinicians and AI developers can promote transparency. Equitable deployment requires careful procurement of training data representative of the populations served that incorporate social determinants while engaging stakeholders, ensuring cultural sensitivity in generated text, and enhancing medical education.</p>","PeriodicalId":16317,"journal":{"name":"Journal of Medical Ethics","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Ethics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1136/jme-2024-110656","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
引用次数: 0

Abstract

Generative artificial intelligence (AI) chatbots such as ChatGPT have several potential clinical applications, but their use for clinical documentation remains underexplored. AI-generated clinical documentation presents an appealing solution to administrative burden but raises new and old ethical concerns that may be overlooked. This article reviews the potential use of generative AI chatbots for purposes such as note-writing, handoffs, and prior authorisation letters, and the ethical considerations arising from their use in this context. AI-generated documentation may offer standardised and consistent documentation across encounters but may also embed biases that can spread across clinical teams relying on previous notes or handoffs, compromising clinical judgement, especially for vulnerable populations such as cognitively impaired or non-English-speaking patients. These tools may transform clinician-patient relationships by reducing administrative work and enhancing shared decision-making but may also compromise the emotional and moral elements of patient care. Moreover, the lack of algorithmic transparency raises concerns that may complicate the determination of responsibility when errors occur. To address these considerations, we propose notifying patients when the use of AI-generated clinical documentation meaningfully impacts their understanding of care, requiring clinician review of drafts, and clarifying areas of ambiguity to protect patient autonomy. Generative AI-specific legislation, error reporting databases and accountable measures for clinicians and AI developers can promote transparency. Equitable deployment requires careful procurement of training data representative of the populations served that incorporate social determinants while engaging stakeholders, ensuring cultural sensitivity in generated text, and enhancing medical education.

Source journal
Journal of Medical Ethics (Medicine: Medical Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Articles per year: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.