Leveraging a Large Language Model for Streamlined Medical Record Generation: Implications for Healthcare Informatics
Yi-Ling Chiang, Kuei-Fen Yang, Pin-Chih Su, Shang-Feng Tsai, Kai-Li Liang
Applied Clinical Informatics, published 2025-09-25. DOI: 10.1055/a-2707-2959
Citations: 0
Abstract
Objectives: This study aimed to leverage a Large Language Model (LLM) to improve the efficiency and thoroughness of medical record documentation. It focused on aiding clinical staff in creating structured summaries with the help of an LLM and on assessing the quality of these AI-proposed records against those produced by physicians.
Methods: A team of specialists, including data engineers, physicians, and medical information experts, was assembled to develop guidelines for medical summaries produced by an LLM (Llama 3.1), under the direction of policymakers at the study hospital. The LLM drafts admission notes, weekly summaries, and discharge notes for physicians to review and edit. The validated Physician Documentation Quality Instrument (PDQI-9) was used to compare the quality of physician-authored and LLM-generated medical records.
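The workflow above has the LLM draft structured notes from chart facts for physician review. A minimal sketch of one way such a prompt could be assembled is shown below; the section names, field layout, and instruction wording are illustrative assumptions, not the study's actual guidelines.

```python
# Hypothetical sketch of prompt assembly for an LLM-drafted discharge note.
# Section names and template wording are assumptions for illustration only.

NOTE_SECTIONS = [
    "Admission Diagnosis",
    "Hospital Course",
    "Discharge Medications",
    "Follow-up Plan",
]

def build_discharge_prompt(chart_facts: dict) -> str:
    """Compose a section-by-section drafting prompt from structured chart facts."""
    lines = [
        "Draft a discharge note with the following sections.",
        "Use only the facts provided; do not invent clinical details.",
        "",
    ]
    for section in NOTE_SECTIONS:
        lines.append(f"## {section}")
        # Each fact becomes one bullet; empty sections stay empty for review.
        lines.extend(f"- {fact}" for fact in chart_facts.get(section, []))
        lines.append("")
    return "\n".join(lines)
```

In a deployment like the one described, the prompt would be sent to the model (e.g., a locally hosted Llama 3.1) and the draft returned to the physician for review and editing, which remains the final quality gate.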
Results: No significant difference was observed in total PDQI-9 scores between physician-drafted and AI-generated weekly summaries and discharge notes (P = 0.129 and P = 0.873, respectively). However, total PDQI-9 scores for admission notes differed significantly between physicians and the AI (P = 0.004), as did several individual PDQI-9 items. After deployment at our hospital, the note-assistance function gradually gained adoption.
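The abstract reports P-values for the physician-versus-AI comparisons but does not name the statistical test used. As a sketch only, the stdlib snippet below runs a two-sided paired permutation test on total PDQI-9 scores; the scores shown are synthetic placeholders for demonstration, not study data.

```python
# Illustrative only: the abstract does not name the test, and these scores
# are synthetic. A paired permutation test is one common choice for
# comparing matched document ratings such as PDQI-9 totals.
import random

def paired_permutation_p(phys, ai, n_iter=10_000, seed=0):
    """Two-sided permutation test on paired score differences."""
    rng = random.Random(seed)
    diffs = [p - a for p, a in zip(phys, ai)]
    observed = abs(sum(diffs))
    count = 0
    for _ in range(n_iter):
        # Randomly flip the sign of each pair's difference (null: no effect).
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped) >= observed:
            count += 1
    return count / n_iter

# Synthetic PDQI-9 totals (hypothetical, for demonstration only).
phys_scores = [40, 42, 39, 44, 41, 43]
ai_scores = [38, 41, 40, 42, 40, 41]
p_value = paired_permutation_p(phys_scores, ai_scores)
```

With real paired ratings, the same function would return a P-value comparable in role to those reported above.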
Conclusions: LLMs show considerable promise for enhancing the efficiency and quality of medical record summaries. Successful integration of LLM-assisted documentation requires regular quality assessments, continuous support, and training. Implementing LLMs can free clinical staff to concentrate on higher-value tasks, potentially enhancing overall healthcare delivery.
Journal Introduction:
ACI is the third Schattauer journal dealing with biomedical and health informatics. It complements our other journals, Methods of Information in Medicine and the Yearbook of Medical Informatics. With the Yearbook of Medical Informatics serving as the "Milestone," or state-of-the-art, journal and Methods of Information in Medicine as the "Science and Research" journal of IMIA, ACI intends to be the "Practical" journal of IMIA.