PromptLink: Leveraging Large Language Models for Cross-Source Biomedical Concept Linking.

Yuzhang Xie, Jiaying Lu, Joyce Ho, Fadi Nahab, Xiao Hu, Carl Yang
{"title":"PromptLink: Leveraging Large Language Models for Cross-Source Biomedical Concept Linking.","authors":"Yuzhang Xie, Jiaying Lu, Joyce Ho, Fadi Nahab, Xiao Hu, Carl Yang","doi":"10.1145/3626772.3657904","DOIUrl":null,"url":null,"abstract":"<p><p>Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to the discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and can hardly generalize beyond the limited amounts of rules, thesauri, or training samples. Recently, large language models (LLMs) have exhibited impressive results in diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that can fit in the LLM context windows. Then it utilizes an LLM to link concepts through two-stage prompts, where the first-stage prompt aims to elicit the biomedical prior knowledge from the LLM for the concept linking task and the second-stage prompt enforces the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two EHR datasets and an external biomedical KG demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework without reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. 
The source code of this study is available at https://github.com/constantjxyz/PromptLink.</p>","PeriodicalId":520431,"journal":{"name":"International ACM SIGIR Conference on Research and Development in Information Retrieval. Annual International ACMSIGIR Conference on Research & Development in Information Retrieval","volume":"2024 ","pages":"2589-2593"},"PeriodicalIF":0.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11867735/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International ACM SIGIR Conference on Research and Development in Information Retrieval. Annual International ACMSIGIR Conference on Research & Development in Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3626772.3657904","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/11 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Linking (aligning) biomedical concepts across diverse data sources enables various integrative analyses, but it is challenging due to discrepancies in concept naming conventions. Various strategies have been developed to overcome this challenge, such as those based on string-matching rules, manually crafted thesauri, and machine learning models. However, these methods are constrained by limited prior biomedical knowledge and struggle to generalize beyond their limited sets of rules, thesauri, or training samples. Recently, large language models (LLMs) have exhibited impressive results on diverse biomedical NLP tasks due to their unprecedentedly rich prior knowledge and strong zero-shot prediction abilities. However, LLMs suffer from issues including high costs, limited context length, and unreliable predictions. In this research, we propose PromptLink, a novel biomedical concept linking framework that leverages LLMs. It first employs a biomedical-specialized pre-trained language model to generate candidate concepts that fit within the LLM's context window. It then uses an LLM to link concepts through two-stage prompts, where the first-stage prompt elicits the LLM's prior biomedical knowledge for the concept linking task and the second-stage prompt requires the LLM to reflect on its own predictions to further enhance their reliability. Empirical results on the concept linking task between two EHR datasets and an external biomedical KG demonstrate the effectiveness of PromptLink. Furthermore, PromptLink is a generic framework with no reliance on additional prior knowledge, context, or training data, making it well-suited for concept linking across various types of data sources. The source code of this study is available at https://github.com/constantjxyz/PromptLink.
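The pipeline the abstract describes (candidate generation to fit the context window, then a knowledge-eliciting prompt followed by a self-reflection prompt) can be sketched roughly as below. This is an illustrative sketch, not the authors' implementation: a character-trigram similarity stands in for the biomedical pre-trained language model's embeddings, and the two prompts are only assembled as strings rather than sent to an LLM. All function names are hypothetical.

```python
# Sketch of a PromptLink-style two-stage pipeline (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in embedding: character-trigram counts. The actual framework
    uses a biomedical-specialized PLM encoder, which handles naming
    discrepancies that surface-level trigrams cannot."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_candidates(query: str, kg_concepts: list[str], k: int = 3) -> list[str]:
    """Stage 0: shrink the full KG concept list to k candidates small
    enough to fit in an LLM context window."""
    q = embed(query)
    ranked = sorted(kg_concepts, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def first_stage_prompt(query: str, candidates: list[str]) -> str:
    """Stage 1: prompt eliciting the LLM's prior biomedical knowledge."""
    opts = "\n".join(f"- {c}" for c in candidates)
    return (f"Which of the following biomedical concepts refers to the same "
            f"entity as '{query}'?\n{opts}\nAnswer with one option or 'none'.")

def second_stage_prompt(query: str, prediction: str) -> str:
    """Stage 2: self-reflection prompt to improve prediction reliability."""
    return (f"You previously linked '{query}' to '{prediction}'. "
            f"Reflect on this prediction: is it correct? Answer yes or no.")
```

For example, `generate_candidates("chronic renal failure", kg)` would rank `"renal failure, chronic"` above unrelated concepts, and the two prompt builders would then wrap the query and candidates for the LLM calls. The key design point from the paper survives even in this toy form: candidate generation is cheap and recall-oriented, while the expensive, precision-oriented LLM judgment is confined to a short list.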
