Response to Sun and Colleagues: Charting a Simpler Ethical Landscape of Generative AI-Augmented Clinical Documentation

IF 2.1 · CAS Q4 (Medicine) · JCR Q3 HEALTH CARE SCIENCES & SERVICES
Richard C. Armitage
Journal of Evaluation in Clinical Practice, 31(5). DOI: 10.1111/jep.70232. Published 2025-07-28. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/jep.70232
The recent article by Sun and colleagues sets out to identify key ethical considerations for the use of generative AI (artificial intelligence) in clinical documentation and offers recommendations to address these concerns [1]. Health equity considerations, the clinician–patient relationship, and algorithmic transparency and integrity are identified as the key ethical considerations, and the article suggests mitigating these concerns by enhancing patient autonomy, ensuring accountability, and promoting health equity.

At the outset of the article, the scope of the enquiry is set as focusing ‘on the use of generative AI chatbots for clinical documentation.’ It is clarified that generative AI-assisted clinical documentation refers to ‘the process of summarising patient interactions into encounter notes or handoff reports, drafting discharge or after-visit summaries, generating supporting documents for processes such as prior authorisations, and related documents, but without independently initiating clinical decisions’.

Accordingly, the use of generative AI for the purpose of clinical documentation production within this scope would only extend to the summarisation of two kinds of source material: first, clinician-written clinical notes, such as admission notes for the production of handoff reports or discharge summaries; second, patient–clinician verbal discussions, such as for the generation of after-visit summaries or prior authorisation documents.¹ In both kinds, the source material is generated by the clinician alone or by the clinician and patient, and not by the generative AI. The generated documentation would be used by clinicians (such as the same doctor reading after-visit summaries at a later date, or other clinicians like hospital doctors reading handoff reports or primary care doctors reading discharge summaries), insurers (such as prior authorisation documents), and patients (such as lay-language discharge summaries).

The article correctly identifies three relevant ethical considerations regarding the use of generative AI in broader clinical practice. However, these arise from the deployment of generative AI beyond the stated scope of the article. If, instead, the enquiry remains within its stated scope—‘the use of generative AI chatbots for clinical documentation’, which involves summarising clinician-written clinical notes or patient–clinician verbal discussions—the identified ethical considerations either do not arise or are easily mitigated by AI-generated clinical documents being reviewed, edited, and approved by the clinician using these tools. This shall now be explained in the same order as addressed in the article.

The author declares no conflicts of interest.
