ZeroTuneBio NER: A three-stage framework for zero-shot and zero-tuning biomedical entity extraction using large language models and prompt engineering

IF 4.8 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Mingyuan Qin, Lei Feng, Jing Lu, Ziyan Sun, Zhengyu Yu, Lianyi Han
{"title":"ZeroTuneBio NER:一个使用大型语言模型和快速工程的三阶段框架,用于零射击和零调谐生物医学实体提取","authors":"Mingyuan Qin ,&nbsp;Lei Feng ,&nbsp;Jing Lu ,&nbsp;Ziyan Sun ,&nbsp;Zhengyu Yu ,&nbsp;Lianyi Han","doi":"10.1016/j.cmpb.2025.109070","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>This study aims to (1) enhance the performance of large language models (LLMs) in biomedical entity extraction, (2) investigate zero-shot named entity recognition (NER) capabilities without fine-tuning, and (3) compare the proposed framework with existing models and human annotation methods. Additionally, we analyze discrepancies between human and LLM-generated annotations to refine manual labeling processes for specialized datasets.</div></div><div><h3>Materials and Methods</h3><div>We propose <strong>ZeroTuneBio NER</strong>, a three-stage NER framework integrating chain-of-thought reasoning and prompt engineering. Evaluated on three public datasets (disease, chemistry, and gene), the method requires no task-specific examples or LLM fine-tuning, addressing challenges in complex concept interpretation.</div></div><div><h3>Results</h3><div>ZeroTuneBio NER excels in tasks without strict matching, achieving an average F1-score improvement of <strong>0.28</strong> over direct LLM queries and a partial-matching F1-score of <strong>∼88</strong> <strong>%</strong>. It rivals the performance of a fine-tuned LLaMA model trained on <strong>11,240 examples</strong> and surpasses BioBERT trained on <strong>22,480 examples</strong> when strict-matching errors are excluded. Notably, LLMs significantly optimize manual annotation, accelerating speed and reducing costs.</div></div><div><h3>Conclusion</h3><div>ZeroTuneBio NER demonstrates that LLMs can perform high-quality NER without fine-tuning, reducing reliance on manual annotation. The framework broadens LLM applications in biomedical NER, while our analysis highlights its scalability and future research directions.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"272 ","pages":"Article 109070"},"PeriodicalIF":4.8000,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ZeroTuneBio NER: A three-stage framework for zero-shot and zero-tuning biomedical entity extraction using large language models and prompt engineering\",\"authors\":\"Mingyuan Qin ,&nbsp;Lei Feng ,&nbsp;Jing Lu ,&nbsp;Ziyan Sun ,&nbsp;Zhengyu Yu ,&nbsp;Lianyi Han\",\"doi\":\"10.1016/j.cmpb.2025.109070\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Objective</h3><div>This study aims to (1) enhance the performance of large language models (LLMs) in biomedical entity extraction, (2) investigate zero-shot named entity recognition (NER) capabilities without fine-tuning, and (3) compare the proposed framework with existing models and human annotation methods. Additionally, we analyze discrepancies between human and LLM-generated annotations to refine manual labeling processes for specialized datasets.</div></div><div><h3>Materials and Methods</h3><div>We propose <strong>ZeroTuneBio NER</strong>, a three-stage NER framework integrating chain-of-thought reasoning and prompt engineering. 
Evaluated on three public datasets (disease, chemistry, and gene), the method requires no task-specific examples or LLM fine-tuning, addressing challenges in complex concept interpretation.</div></div><div><h3>Results</h3><div>ZeroTuneBio NER excels in tasks without strict matching, achieving an average F1-score improvement of <strong>0.28</strong> over direct LLM queries and a partial-matching F1-score of <strong>∼88</strong> <strong>%</strong>. It rivals the performance of a fine-tuned LLaMA model trained on <strong>11,240 examples</strong> and surpasses BioBERT trained on <strong>22,480 examples</strong> when strict-matching errors are excluded. Notably, LLMs significantly optimize manual annotation, accelerating speed and reducing costs.</div></div><div><h3>Conclusion</h3><div>ZeroTuneBio NER demonstrates that LLMs can perform high-quality NER without fine-tuning, reducing reliance on manual annotation. The framework broadens LLM applications in biomedical NER, while our analysis highlights its scalability and future research directions.</div></div>\",\"PeriodicalId\":10624,\"journal\":{\"name\":\"Computer methods and programs in biomedicine\",\"volume\":\"272 \",\"pages\":\"Article 109070\"},\"PeriodicalIF\":4.8000,\"publicationDate\":\"2025-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer methods and programs in biomedicine\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0169260725004870\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer methods and programs in biomedicine","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0169260725004870","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract


Objective

This study aims to (1) enhance the performance of large language models (LLMs) in biomedical entity extraction, (2) investigate zero-shot named entity recognition (NER) capabilities without fine-tuning, and (3) compare the proposed framework with existing models and human annotation methods. Additionally, we analyze discrepancies between human and LLM-generated annotations to refine manual labeling processes for specialized datasets.

Materials and Methods

We propose ZeroTuneBio NER, a three-stage NER framework integrating chain-of-thought reasoning and prompt engineering. Evaluated on three public datasets (disease, chemistry, and gene), the method requires no task-specific examples or LLM fine-tuning, addressing challenges in complex concept interpretation.
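To make the idea of a staged, zero-tuning prompting pipeline concrete, the sketch below shows one way chain-of-thought prompting can drive entity extraction without examples or fine-tuning. The stage prompts, the entity types, and the query_llm callable are illustrative assumptions; the abstract does not disclose the framework's actual prompts or stage definitions.

# Minimal sketch of staged, chain-of-thought prompting for zero-shot NER.
# NOTE: the three stages, prompt wording, and `query_llm` callable are
# illustrative assumptions, not the authors' published pipeline.
from typing import Callable, List

def zero_shot_bio_ner(text: str,
                      entity_type: str,
                      query_llm: Callable[[str], str]) -> List[str]:
    """Extract `entity_type` mentions from `text` with no examples or fine-tuning."""
    # Stage 1: free-form reasoning about candidate mentions (chain of thought).
    reasoning = query_llm(
        f"Think step by step about which spans in the sentence could be "
        f"{entity_type} mentions, and explain why.\n\nSentence: {text}"
    )
    # Stage 2: convert the reasoning into an explicit candidate list.
    candidates = query_llm(
        f"Based on the reasoning below, list every candidate {entity_type} "
        f"mention, one per line, copied verbatim from the sentence.\n\n"
        f"Sentence: {text}\n\nReasoning: {reasoning}"
    )
    # Stage 3: keep only candidates that literally occur in the source text.
    return [span for span in (line.strip() for line in candidates.splitlines())
            if span and span in text]

# Usage: wrap any chat-completion backend as `query_llm`, then call, e.g.
#   zero_shot_bio_ner("Imatinib inhibits BCR-ABL.", "chemical", query_llm)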

Results

ZeroTuneBio NER excels in tasks without strict matching, achieving an average F1-score improvement of 0.28 over direct LLM queries and a partial-matching F1-score of ∼88 %. It rivals the performance of a fine-tuned LLaMA model trained on 11,240 examples and surpasses BioBERT trained on 22,480 examples when strict-matching errors are excluded. Notably, LLMs significantly optimize manual annotation, accelerating speed and reducing costs.
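The gap between the strict and partial results comes from how spans are matched: strict matching requires exact boundaries, while partial matching credits any overlap with a gold span. The snippet below is a minimal illustration of that difference, not the paper's scoring code.

# Strict vs. partial span matching for NER scoring (illustrative only).
# Spans are (start, end) character offsets.
def span_f1(pred, gold, partial=False):
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    match = overlaps if partial else (lambda a, b: a == b)
    tp = sum(any(match(p, g) for g in gold) for p in pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold = [(0, 26)]   # gold entity:      "non-small cell lung cancer"
pred = [(10, 26)]  # predicted entity: "cell lung cancer" (boundary error)
print(span_f1(pred, gold))                # 0.0 -> strict matching rejects the span
print(span_f1(pred, gold, partial=True))  # 1.0 -> partial matching credits the overlap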

Conclusion

ZeroTuneBio NER demonstrates that LLMs can perform high-quality NER without fine-tuning, reducing reliance on manual annotation. The framework broadens LLM applications in biomedical NER, while our analysis highlights its scalability and future research directions.
Source journal
Computer methods and programs in biomedicine (Engineering Technology - Engineering: Biomedicine)
CiteScore: 12.30
Self-citation rate: 6.60%
Articles per year: 601
Review time: 135 days
Journal description: To encourage the development of formal computing methods, and their application in biomedical research and medical practice, by illustration of fundamental principles in biomedical informatics research; to stimulate basic research into application software design; to report the state of research of biomedical information processing projects; to report new computer methodologies applied in biomedical areas; the eventual distribution of demonstrable software to avoid duplication of effort; to provide a forum for discussion and improvement of existing software; to optimize contact between national organizations and regional user groups by promoting an international exchange of information on formal methods, standards and software in biomedicine. Computer Methods and Programs in Biomedicine covers computing methodology and software systems derived from computing science for implementation in all aspects of biomedical research and medical practice. It is designed to serve: biochemists; biologists; geneticists; immunologists; neuroscientists; pharmacologists; toxicologists; clinicians; epidemiologists; psychiatrists; psychologists; cardiologists; chemists; (radio)physicists; computer scientists; programmers and systems analysts; biomedical, clinical, electrical and other engineers; teachers of medical informatics and users of educational software.