Generalizable clinical note section identification with large language models.

JAMIA Open · IF 2.5 · Q2 (Health Care Sciences & Services)
Pub Date: 2024-08-13 · eCollection Date: 2024-10-01 · DOI: 10.1093/jamiaopen/ooae075
Weipeng Zhou, Timothy A Miller
JAMIA Open, 7(3), ooae075. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319784/pdf/

Abstract

Objectives: Clinical note section identification helps locate relevant information and can benefit downstream tasks such as named entity recognition. However, traditional supervised methods suffer from transferability issues. This study proposes a new framework that uses large language models (LLMs) for section identification to overcome these limitations.

Materials and methods: We framed section identification as question answering and provided the section definitions as free text. We evaluated multiple LLMs off the shelf, without any training. We also fine-tuned LLMs to investigate how the size and specificity of the fine-tuning dataset impact model performance.
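The question-answering framing described above can be sketched as follows. This is a minimal illustration, not the authors' code: the section names, definitions, and answer-normalization logic are all hypothetical, and the prompt would be sent to an LLM such as GPT4 or Tulu2-70b in practice.

```python
# Illustrative sketch of framing section identification as QA:
# the prompt carries free-text section definitions, and the model
# is asked which section a note segment belongs to.

# Hypothetical section definitions (the paper uses 27 section types).
SECTION_DEFINITIONS = {
    "history of present illness": "Narrative of the current problem and its course.",
    "medications": "List of the patient's current drugs and dosages.",
    "assessment and plan": "Clinician's diagnostic impression and next steps.",
}

def build_prompt(segment: str) -> str:
    """Compose a QA-style prompt with free-text section definitions."""
    defs = "\n".join(f"- {name}: {desc}"
                     for name, desc in SECTION_DEFINITIONS.items())
    return (
        "You are given definitions of clinical note sections:\n"
        f"{defs}\n\n"
        "Question: Which section does the following text belong to?\n"
        f"Text: {segment}\n"
        "Answer with the section name only."
    )

def parse_answer(model_output: str) -> str:
    """Normalize the model's free-text answer to a known section label."""
    answer = model_output.strip().lower().rstrip(".")
    return answer if answer in SECTION_DEFINITIONS else "unknown"
```

In the off-the-shelf setting, `build_prompt` would be the entire "training": only the definitions change per section inventory, which is what makes the approach transferable across institutions with different note structures.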

Results: GPT4 achieved the highest F1 score of 0.77. The best open-source model (Tulu2-70b) achieved 0.64, on par with GPT3.5 (ChatGPT). GPT4 also obtained F1 scores greater than 0.9 for 9 of the 27 section types (33%) and greater than 0.8 for 15 of 27 (56%). Our fine-tuned models plateaued as the size of the general-domain dataset increased. We also found that adding a reasonable number of section identification examples was beneficial.

Discussion: These results indicate that GPT4 is nearly production-ready for section identification: it appears to possess both knowledge of note structure and the ability to follow complex instructions. The best current open-source LLM is catching up.

Conclusion: Our study shows that LLMs are promising for generalizable clinical note section identification. They can potentially be improved further by adding section identification examples to the fine-tuning dataset.
