Improving Radiology Report Conciseness and Structure via Local Large Language Models.

Iryna Hartsock, Cyrillo Araujo, Les Folio, Ghulam Rasool
{"title":"Improving Radiology Report Conciseness and Structure via Local Large Language Models.","authors":"Iryna Hartsock, Cyrillo Araujo, Les Folio, Ghulam Rasool","doi":"10.1007/s10278-025-01510-w","DOIUrl":null,"url":null,"abstract":"<p><p>Radiology reports are often lengthy and unstructured, posing challenges for referring physicians to quickly identify critical imaging findings while increasing risk of missed information. This retrospective study aimed to enhance radiology reports by making them concise and well-structured, with findings organized by relevant organs. To achieve this, we utilized private large language models (LLMs) deployed locally within our institution's firewall, ensuring data security and minimizing computational costs. Using a dataset of 814 radiology reports from seven board-certified body radiologists at [-blinded for review-], we tested five prompting strategies within the LangChain framework. After evaluating several models, the Mixtral LLM demonstrated superior adherence to formatting requirements compared to alternatives like Llama. The optimal strategy involved condensing reports first and then applying structured formatting based on specific instructions, reducing verbosity while improving clarity. Across all radiologists and reports, the Mixtral LLM reduced redundant word counts by more than 53%. These findings highlight the potential of locally deployed, open-source LLMs to streamline radiology reporting. By generating concise, well-structured reports, these models enhance information retrieval and better meet the needs of referring physicians, ultimately improving clinical workflows.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-025-01510-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Radiology reports are often lengthy and unstructured, making it difficult for referring physicians to quickly identify critical imaging findings and increasing the risk of missed information. This retrospective study aimed to enhance radiology reports by making them concise and well-structured, with findings organized by the relevant organs. To achieve this, we utilized private large language models (LLMs) deployed locally within our institution's firewall, ensuring data security and minimizing computational costs. Using a dataset of 814 radiology reports from seven board-certified body radiologists at [-blinded for review-], we tested five prompting strategies within the LangChain framework. Among the models evaluated, the Mixtral LLM demonstrated superior adherence to formatting requirements compared with alternatives such as Llama. The optimal strategy involved condensing reports first and then applying structured formatting based on specific instructions, reducing verbosity while improving clarity. Across all radiologists and reports, the Mixtral LLM reduced redundant word counts by more than 53%. These findings highlight the potential of locally deployed, open-source LLMs to streamline radiology reporting. By generating concise, well-structured reports, these models enhance information retrieval and better meet the needs of referring physicians, ultimately improving clinical workflows.
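
The two-step strategy the abstract describes (condense first, then apply structured formatting) maps naturally onto a chained prompting pipeline in LangChain. Below is a minimal sketch of such a pipeline, assuming Mixtral is served locally (e.g., via Ollama) behind the institutional firewall; the model tag, prompt wording, and helper name are illustrative assumptions, not the authors' actual prompts or code.

```python
# A minimal sketch of the condense-then-structure strategy, assuming Mixtral
# is served locally via Ollama inside the institutional firewall. The model
# tag, prompt wording, and function name are illustrative assumptions, not
# the study's actual prompts.
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate

# Local model: report text never leaves the institutional network.
llm = Ollama(model="mixtral", temperature=0)

condense_prompt = PromptTemplate.from_template(
    "Condense the following radiology report, removing redundant wording "
    "while preserving every clinical finding:\n\n{report}"
)
structure_prompt = PromptTemplate.from_template(
    "Reformat this condensed radiology report so that findings are grouped "
    "under organ-specific headings:\n\n{condensed}"
)

# Step 1: condense the raw report; Step 2: impose organ-based structure.
condense_chain = condense_prompt | llm
structure_chain = structure_prompt | llm

def restructure_report(report: str) -> str:
    condensed = condense_chain.invoke({"report": report})
    return structure_chain.invoke({"condensed": condensed})
```

Running the condensing step before the formatting step mirrors the optimal ordering reported above: the first pass strips redundancy, so the second pass only has to impose the organ-based layout. A simple redundancy-reduction metric in this setup would be `1 - len(output.split()) / len(report.split())`, though the study's exact word-count methodology is not specified in the abstract.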
