Leveraging a Large Language Model for Streamlined Medical Record Generation: Implications for Healthcare Informatics.

IF 2.2 · Tier 2 (Medicine) · Q4 MEDICAL INFORMATICS
Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang
{"title":"利用大型语言模型简化医疗记录生成:对医疗保健信息学的影响。","authors":"Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang","doi":"10.1055/a-2707-2959","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>This study aimed to leverage a Large Language Model (LLM) to improve the efficiency and thoroughness of medical record documentation. This study focused on aiding clinical staff in creating structured summaries with the help of an LLM and assessing the quality of these AI-proposed records in comparison to those produced by doctors.</p><p><strong>Methods: </strong>This strategy involved assembling a team of specialists, including data engineers, physicians, and medical information experts, to develop guidelines for medical summaries produced by an LLM (Llama 3.1), all under the direction of policymakers at the study hospital. The LLM proposes admission, weekly summaries, and discharge notes for physicians to review and edit. A validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.</p><p><strong>Results: </strong>The results showed no significant difference was observed in the total PDQI-9 scores between the physician-drafted and AI-created weekly summaries and discharge notes (P = 0.129 and 0.873, respectively). However, there was a significant difference in the total PDQI-9 scores between the physician and AI admission notes (P = 0.004). Furthermore, there were significant differences in item levels between physicians' and AI notes. After deploying the note-assisted function in our hospital, it gradually gained popularity.</p><p><strong>Conclusions: </strong>LLM shows considerable promise for enhancing the efficiency and quality of medical record summaries. For the successful integration of LLM-assisted documentation, regular quality assessments, continuous support, and training are essential. Implementing LLMs can allow clinical staff to concentrate on more valuable tasks, potentially enhancing overall healthcare delivery.</p>","PeriodicalId":48956,"journal":{"name":"Applied Clinical Informatics","volume":" ","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Leveraging a Large Language Model for Streamlined Medical Record Generation: Implications for Healthcare Informatics.\",\"authors\":\"Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang\",\"doi\":\"10.1055/a-2707-2959\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>This study aimed to leverage a Large Language Model (LLM) to improve the efficiency and thoroughness of medical record documentation. This study focused on aiding clinical staff in creating structured summaries with the help of an LLM and assessing the quality of these AI-proposed records in comparison to those produced by doctors.</p><p><strong>Methods: </strong>This strategy involved assembling a team of specialists, including data engineers, physicians, and medical information experts, to develop guidelines for medical summaries produced by an LLM (Llama 3.1), all under the direction of policymakers at the study hospital. The LLM proposes admission, weekly summaries, and discharge notes for physicians to review and edit. 
A validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.</p><p><strong>Results: </strong>The results showed no significant difference was observed in the total PDQI-9 scores between the physician-drafted and AI-created weekly summaries and discharge notes (P = 0.129 and 0.873, respectively). However, there was a significant difference in the total PDQI-9 scores between the physician and AI admission notes (P = 0.004). Furthermore, there were significant differences in item levels between physicians' and AI notes. After deploying the note-assisted function in our hospital, it gradually gained popularity.</p><p><strong>Conclusions: </strong>LLM shows considerable promise for enhancing the efficiency and quality of medical record summaries. For the successful integration of LLM-assisted documentation, regular quality assessments, continuous support, and training are essential. Implementing LLMs can allow clinical staff to concentrate on more valuable tasks, potentially enhancing overall healthcare delivery.</p>\",\"PeriodicalId\":48956,\"journal\":{\"name\":\"Applied Clinical Informatics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2025-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Clinical Informatics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1055/a-2707-2959\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"MEDICAL INFORMATICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Clinical Informatics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1055/a-2707-2959","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"MEDICAL INFORMATICS","Score":null,"Total":0}
Citations: 0

Abstract


Objectives: This study aimed to leverage a Large Language Model (LLM) to improve the efficiency and thoroughness of medical record documentation. This study focused on aiding clinical staff in creating structured summaries with the help of an LLM and assessing the quality of these AI-proposed records in comparison to those produced by doctors.

Methods: This strategy involved assembling a team of specialists, including data engineers, physicians, and medical information experts, to develop guidelines for medical summaries produced by an LLM (Llama 3.1), all under the direction of policymakers at the study hospital. The LLM proposes admission notes, weekly summaries, and discharge notes for physicians to review and edit. A validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.
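The abstract does not describe the hospital's actual generation pipeline, so the following is only a minimal sketch of how a locally hosted Llama 3.1 model might be prompted to propose a structured discharge note for physician review. The endpoint, model name, prompt wording, and section headings are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: drafting a discharge note with a locally hosted Llama 3.1
# served through an OpenAI-compatible endpoint (e.g., vLLM or Ollama).
# All names below (endpoint, model identifier, section headings) are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Draft a structured discharge "
    "note with the sections: Diagnosis, Hospital Course, Medications on "
    "Discharge, and Follow-up Plan. Use only facts present in the supplied "
    "chart excerpts and flag anything uncertain for physician review."
)

def draft_discharge_note(chart_excerpts: str) -> str:
    """Return an LLM-proposed discharge note for a physician to review and edit."""
    response = client.chat.completions.create(
        model="llama-3.1-70b-instruct",  # assumed deployment name
        temperature=0.2,                  # low temperature for factual drafting
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": chart_excerpts},
        ],
    )
    return response.choices[0].message.content
```

In a workflow like the one described above, the returned draft would be surfaced to the physician for review and editing rather than filed automatically.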

Results: No significant difference was observed in total PDQI-9 scores between physician-drafted and AI-created weekly summaries and discharge notes (P = 0.129 and 0.873, respectively). However, there was a significant difference in total PDQI-9 scores between physician and AI admission notes (P = 0.004). Furthermore, there were significant differences at the item level between physician and AI notes. After the note-assistance function was deployed in our hospital, it gradually gained popularity.
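As an illustration of the kind of score comparison reported above, the sketch below contrasts hypothetical total PDQI-9 scores (nine items, assumed to be rated 1 to 5, giving totals of 9 to 45) for physician and AI notes using a Mann-Whitney U test. The abstract does not state which statistical test the authors used, and the data here are invented purely for demonstration.

```python
# Illustrative only: comparing hypothetical total PDQI-9 scores for
# physician-authored versus LLM-generated notes with a Mann-Whitney U test.
from scipy.stats import mannwhitneyu

# Hypothetical total scores (nine items, each rated 1-5).
physician_totals = [40, 38, 42, 37, 41, 39, 43, 36]
llm_totals       = [39, 37, 41, 38, 40, 38, 42, 35]

stat, p_value = mannwhitneyu(physician_totals, llm_totals, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.3f}")
# A P value above 0.05, as reported for the weekly summaries and discharge notes,
# would indicate no statistically significant difference in total score.
```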

Conclusions: LLMs show considerable promise for enhancing the efficiency and quality of medical record summaries. For the successful integration of LLM-assisted documentation, regular quality assessments, continuous support, and training are essential. Implementing LLMs can allow clinical staff to concentrate on more valuable tasks, potentially enhancing overall healthcare delivery.

Source journal
Applied Clinical Informatics (MEDICAL INFORMATICS)
CiteScore: 4.60
Self-citation rate: 24.10%
Annual publications: 132
Journal description: ACI is the third Schattauer journal dealing with biomedical and health informatics. It perfectly complements our other journals, Methods of Information in Medicine and the Yearbook of Medical Informatics. The Yearbook of Medical Informatics being the "Milestone" or state-of-the-art journal and Methods of Information in Medicine being the "Science and Research" journal of IMIA, ACI intends to be the "Practical" journal of IMIA.