Summarize-then-Prompt: A Novel Prompt Engineering Strategy for Generating High-Quality Discharge Summaries
Eyal Klang, Jaskirat Gill, Aniket Sharma, Evan Leibner, Moein Sabounchi, Robert Freeman, Roopa Kohli-Seth, Patricia Kovatch, Alexander Charney, Lisa Stump, David Reich, Girish Nadkarni, Ankit Sakhuja
Applied Clinical Informatics, published online 2025-05-21. DOI: 10.1055/a-2617-6572
Abstract
Background: Accurate discharge summaries are essential for effective communication between hospital and outpatient providers, but generating them is labor-intensive. Large language models (LLMs), such as GPT-4, have shown promise in automating this process, potentially reducing clinician workload and improving documentation quality. A recent study using GPT-4 to generate discharge summaries from concatenated clinical notes found that while the summaries were concise and coherent, they often lacked comprehensiveness and contained errors. To address this, we evaluated a structured prompting strategy, summarize-then-prompt, which first generates concise summaries of individual clinical notes and then combines them to create a more focused input for the LLM.
Objectives: The objective of this study was to assess the effectiveness of a novel prompting strategy, summarize-then-prompt, in generating discharge summaries that are more complete, accurate, and concise in comparison to the approach that simply concatenates clinical notes.
Methods: We conducted a retrospective study comparing two prompting strategies: direct concatenation (M1) and summarize-then-prompt (M2). A random sample of 50 hospital stays was selected from a large hospital system. Three attending physicians independently evaluated the generated hospital course summaries for completeness, correctness, and conciseness using a 5-point Likert scale.
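To make the contrast between the two strategies concrete, here is a minimal sketch of how M1 and M2 might be implemented, assuming an OpenAI-style chat completions API. The prompt wording, model name, and helper names (ask, m1_direct_concatenation, m2_summarize_then_prompt) are illustrative assumptions, not the prompts or code used in the study.

```python
# Illustrative sketch of the two prompting strategies compared in the study.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send a single-turn prompt and return the model's text response."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def m1_direct_concatenation(notes: list[str]) -> str:
    """M1: concatenate all clinical notes and prompt once for the hospital course."""
    prompt = (
        "Write the hospital course section of a discharge summary "
        "based on the following clinical notes:\n\n" + "\n\n---\n\n".join(notes)
    )
    return ask(prompt)

def m2_summarize_then_prompt(notes: list[str]) -> str:
    """M2: summarize each note individually, then prompt over the concatenated summaries."""
    per_note_summaries = [
        ask("Summarize the key clinical events in this note concisely:\n\n" + note)
        for note in notes
    ]
    prompt = (
        "Write the hospital course section of a discharge summary "
        "based on these per-note summaries:\n\n" + "\n\n---\n\n".join(per_note_summaries)
    )
    return ask(prompt)
```

The only structural difference is M2's intermediate per-note summarization step, which shortens and focuses the final prompt before the hospital course is generated.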
Results: The summarize-then-prompt strategy outperformed the direct concatenation strategy in both the completeness (4.28 ± 0.63 vs. 4.01 ± 0.69, p < 0.001) and correctness (4.37 ± 0.54 vs. 4.17 ± 0.57, p = 0.002) of the hospital course summaries. However, the two strategies showed no significant difference in conciseness (p = 0.308).
Conclusion: Summarizing individual notes before concatenation improves LLM-generated discharge summaries, enhancing their completeness and accuracy without sacrificing conciseness. This approach may facilitate the integration of LLMs into clinical workflows, offering a promising strategy for automating discharge summary generation and potentially reducing clinician burden.
About the journal:
ACI is the third Schattauer journal dealing with biomedical and health informatics. It perfectly complements our other journals, Methods of Information in Medicine and the Yearbook of Medical Informatics. With the Yearbook of Medical Informatics being the "Milestone" or state-of-the-art journal and Methods of Information in Medicine being the "Science and Research" journal of IMIA, ACI intends to be the "Practical" journal of IMIA.