{"title":"对Sun及其同事的回应:绘制生成人工智能增强临床文件的更简单的伦理景观","authors":"Richard C. Armitage","doi":"10.1111/jep.70232","DOIUrl":null,"url":null,"abstract":"<p>The recent article by Sun and colleagues sets out to identify key ethical considerations for the use of generative AI (artificial intelligence) in clinical documentation and offers recommendations to address these concerns [<span>1</span>]. Health equity considerations, the clinician–patient relationship, and algorithmic transparency and integrity are identified as key ethical considerations, and focusing on enhancing patient autonomy, ensuring accountability, and promoting health equity are suggested to mitigate these concerns.</p><p>At the outset of the article, the scope of the enquiry is set as focusing ‘on the use of generative AI chatbots for clinical documentation.’ It is clarified that generative AI-assisted clinical documentation refers to ‘the process of summarising patient interactions into encounter notes or handoff reports, drafting discharge or after-visit summaries, generating supporting documents for processes such as prior authorisations, and related documents, but without independently initiating clinical decisions’.</p><p>Accordingly, the use of generative AI for the purpose of clinical documentation production within this scope would only extend to the summarisation of two kinds of source material: first, clinician-written clinical notes, such as admission notes for the production of handoff reports or discharge summaries; second, patient–clinician verbal discussions, such as for the generation of after-visit summaries or prior authorisation documents.1 In both kinds, the source material is generated by the clinician alone or by the clinician and patient, and not by the generative AI. The generated documentation would be used by clinicians (such as the same doctor reading after-visit summaries at a later date, or other clinicians like hospital doctors reading handoff reports or primary care doctors reading discharge summaries), insurers (such as prior authorisation documents), and patients (such as lay language discharge summaries).</p><p>The article correctly identifies three relevant ethical considerations regarding the use of generative AI in broader clinical practice. However, these arise from the deployment of generative AI beyond the stated scope of the article. If, instead, the enquiry remains within its stated scope—‘the use of generative AI chatbots for clinical documentation’, which involves summarising clinician-written clinical notes or patient–clinician verbal discussions—the identified ethical considerations either do not arise or are easily mitigated by AI-generated clinical documents being reviewed, edited, and approved by the clinician using these tools. This shall now be explained in the same order as addressed in the article.</p><p>The author declares no conflicts of interest.</p>","PeriodicalId":15997,"journal":{"name":"Journal of evaluation in clinical practice","volume":"31 5","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jep.70232","citationCount":"0","resultStr":"{\"title\":\"Response to Sun and Colleagues: Charting a Simpler Ethical Landscape of Generative AI-Augmented Clinical Documentation\",\"authors\":\"Richard C. 
Armitage\",\"doi\":\"10.1111/jep.70232\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The recent article by Sun and colleagues sets out to identify key ethical considerations for the use of generative AI (artificial intelligence) in clinical documentation and offers recommendations to address these concerns [<span>1</span>]. Health equity considerations, the clinician–patient relationship, and algorithmic transparency and integrity are identified as key ethical considerations, and focusing on enhancing patient autonomy, ensuring accountability, and promoting health equity are suggested to mitigate these concerns.</p><p>At the outset of the article, the scope of the enquiry is set as focusing ‘on the use of generative AI chatbots for clinical documentation.’ It is clarified that generative AI-assisted clinical documentation refers to ‘the process of summarising patient interactions into encounter notes or handoff reports, drafting discharge or after-visit summaries, generating supporting documents for processes such as prior authorisations, and related documents, but without independently initiating clinical decisions’.</p><p>Accordingly, the use of generative AI for the purpose of clinical documentation production within this scope would only extend to the summarisation of two kinds of source material: first, clinician-written clinical notes, such as admission notes for the production of handoff reports or discharge summaries; second, patient–clinician verbal discussions, such as for the generation of after-visit summaries or prior authorisation documents.1 In both kinds, the source material is generated by the clinician alone or by the clinician and patient, and not by the generative AI. The generated documentation would be used by clinicians (such as the same doctor reading after-visit summaries at a later date, or other clinicians like hospital doctors reading handoff reports or primary care doctors reading discharge summaries), insurers (such as prior authorisation documents), and patients (such as lay language discharge summaries).</p><p>The article correctly identifies three relevant ethical considerations regarding the use of generative AI in broader clinical practice. However, these arise from the deployment of generative AI beyond the stated scope of the article. If, instead, the enquiry remains within its stated scope—‘the use of generative AI chatbots for clinical documentation’, which involves summarising clinician-written clinical notes or patient–clinician verbal discussions—the identified ethical considerations either do not arise or are easily mitigated by AI-generated clinical documents being reviewed, edited, and approved by the clinician using these tools. 
This shall now be explained in the same order as addressed in the article.</p><p>The author declares no conflicts of interest.</p>\",\"PeriodicalId\":15997,\"journal\":{\"name\":\"Journal of evaluation in clinical practice\",\"volume\":\"31 5\",\"pages\":\"\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-07-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/jep.70232\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of evaluation in clinical practice\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/jep.70232\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of evaluation in clinical practice","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/jep.70232","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Response to Sun and Colleagues: Charting a Simpler Ethical Landscape of Generative AI-Augmented Clinical Documentation
The recent article by Sun and colleagues sets out to identify key ethical considerations for the use of generative AI (artificial intelligence) in clinical documentation and offers recommendations to address these concerns [1]. The article identifies health equity, the clinician–patient relationship, and algorithmic transparency and integrity as the key ethical considerations, and suggests that these concerns can be mitigated by enhancing patient autonomy, ensuring accountability, and promoting health equity.
At the outset of the article, the scope of the enquiry is defined as focusing on 'the use of generative AI chatbots for clinical documentation'. Generative AI-assisted clinical documentation is clarified to mean 'the process of summarising patient interactions into encounter notes or handoff reports, drafting discharge or after-visit summaries, generating supporting documents for processes such as prior authorisations, and related documents, but without independently initiating clinical decisions'.
Accordingly, the use of generative AI for the production of clinical documentation within this scope would extend only to the summarisation of two kinds of source material: first, clinician-written clinical notes, such as admission notes for the production of handoff reports or discharge summaries; second, patient–clinician verbal discussions, such as for the generation of after-visit summaries or prior authorisation documents. In both cases, the source material is generated by the clinician alone or by the clinician and patient together, and not by the generative AI. The generated documentation would be used by clinicians (for example, the same doctor reading an after-visit summary at a later date, or other clinicians such as hospital doctors reading handoff reports or primary care doctors reading discharge summaries), by insurers (for example, prior authorisation documents), and by patients (for example, lay-language discharge summaries).
The article correctly identifies three ethical considerations relevant to the use of generative AI in broader clinical practice. However, these considerations arise from the deployment of generative AI beyond the article's stated scope. If, instead, the enquiry remains within that scope ('the use of generative AI chatbots for clinical documentation', which involves summarising clinician-written clinical notes or patient–clinician verbal discussions), the identified ethical considerations either do not arise or are readily mitigated by the clinician reviewing, editing, and approving the AI-generated clinical documents produced with these tools. These points are now explained in the same order in which the article addresses them.
The author declares no conflicts of interest.
Journal Introduction:
The Journal of Evaluation in Clinical Practice aims to promote the evaluation and development of clinical practice across medicine, nursing and the allied health professions. All aspects of health services research and public health policy analysis and debate are of interest to the Journal whether studied from a population-based or individual patient-centred perspective. Of particular interest to the Journal are submissions on all aspects of clinical effectiveness and efficiency including evidence-based medicine, clinical practice guidelines, clinical decision making, clinical services organisation, implementation and delivery, health economic evaluation, health process and outcome measurement and new or improved methods (conceptual and statistical) for systematic inquiry into clinical practice. Papers may take a classical quantitative or qualitative approach to investigation (or may utilise both techniques) or may take the form of learned essays, structured/systematic reviews and critiques.