Weipeng Zhou, Timothy A Miller
JAMIA Open, 7(3): ooae075. Published 2024-08-13 (eCollection 2024/10/1). DOI: 10.1093/jamiaopen/ooae075
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319784/pdf/
Citations: 0
Generalizable clinical note section identification with large language models.
Objectives: Clinical note section identification helps locate relevant information and could be beneficial for downstream tasks such as named entity recognition. However, traditional supervised methods suffer from transferability issues. This study proposes a new framework that uses large language models (LLMs) for section identification to overcome these limitations.
Materials and methods: We framed section identification as question-answering and provided the section definitions in free text. We evaluated multiple LLMs off-the-shelf without any training. We also fine-tuned LLMs to investigate how the size and specificity of the fine-tuning dataset impact model performance.
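The question-answering framing described above can be sketched as a prompt that pairs free-text section definitions with a query about a note segment. The section names, definitions, and prompt wording below are illustrative assumptions, not the paper's actual materials.

```python
# Hypothetical sketch of framing section identification as question-answering.
# The section set and definitions are illustrative, not the study's real ones.
SECTION_DEFINITIONS = {
    "chief complaint": "The patient's primary reason for the visit, in their own words.",
    "medications": "A list of the patient's current medications and dosages.",
    "assessment and plan": "The clinician's diagnostic impression and treatment plan.",
}

def build_section_id_prompt(note_segment: str) -> str:
    """Build a QA-style prompt: definitions in free text, then the question."""
    definitions = "\n".join(
        f"- {name}: {desc}" for name, desc in SECTION_DEFINITIONS.items()
    )
    return (
        "You are given definitions of clinical note sections:\n"
        f"{definitions}\n\n"
        "Question: Which section does the following text belong to? "
        "Answer with the section name only.\n\n"
        f"Text: {note_segment}"
    )

prompt = build_section_id_prompt(
    "Lisinopril 10 mg daily; metformin 500 mg twice daily."
)
```

The resulting prompt string would then be sent to an off-the-shelf LLM; because the definitions travel with the query, new section types can be added without retraining.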
Results: GPT4 achieved the highest F1 score of 0.77. The best open-source model (Tulu2-70b) achieved 0.64, on par with GPT3.5 (ChatGPT). GPT4 also obtained F1 scores greater than 0.9 for 9 of the 27 (33%) section types and greater than 0.8 for 15 of 27 (56%) section types. For our fine-tuned models, performance plateaued as the size of the general-domain dataset increased. We also found that adding a reasonable number of section identification examples is beneficial.
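The per-section-type F1 scores reported above can be computed from parallel lists of gold and predicted section labels. The sketch below is a minimal stdlib-only illustration of that metric; the label names and counts are invented examples, not the study's data.

```python
from collections import defaultdict

def per_section_f1(gold: list[str], pred: list[str]) -> dict[str, float]:
    """Per-section-type F1 from parallel gold/predicted label lists."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted label p, but gold was g
            fn[g] += 1  # gold label g was missed
    scores = {}
    for label in set(gold) | set(pred):
        precision = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        recall = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        scores[label] = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return scores

# Toy example: 4 note segments, one prediction error.
gold = ["medications", "medications", "allergies", "assessment"]
pred = ["medications", "allergies", "allergies", "assessment"]
scores = per_section_f1(gold, pred)
```

With the toy data, "assessment" scores a perfect 1.0 while the confused pair "medications"/"allergies" each score 2/3, mirroring how the paper's per-type results can vary widely around the overall F1.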
Discussion: These results indicate that GPT4 is nearly production-ready for section identification: it appears to possess both knowledge of note structure and the ability to follow complex instructions. The best current open-source LLM is catching up.
Conclusion: Our study shows that LLMs are promising for generalizable clinical note section identification. They have the potential to be further improved by adding section identification examples to the fine-tuning dataset.