From Evidence to Recommendations With Large Language Models: A Feasibility Study
Weilong Zhao, Danni Xia, Ziying Ye, Honghao Lai, Mingyao Sun, Jiajie Huang, Jiayi Liu, Jianing Liu, Long Ge
Journal of Evidence-Based Medicine, published 2025-09-11. DOI: 10.1111/jebm.70067
Abstract
Background: Formulating evidence-based recommendations for practice guidelines is a complex process that requires substantial expertise. Artificial intelligence (AI) shows promise for accelerating the guideline development process. This study evaluates the feasibility of leveraging five large language models (LLMs), namely ChatGPT-3.5, Claude-3 Sonnet, Bard, ChatGLM-4, and Kimi Chat, to generate recommendations based on structured evidence, assesses their concordance, and explores the potential of AI in guideline development.
Methods: General and specific prompts were drafted and validated. We searched PubMed to identify evidence-based guidelines related to health and lifestyle. We randomly selected one recommendation from each included guideline as a sample and extracted the evidence base supporting the selected recommendations. The prompts and evidence were fed into the five LLMs to generate structured recommendations.
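As a rough illustration of this workflow (the paper does not publish its pipeline code, so the prompt text, model identifiers, and the query_llm helper below are all hypothetical), a minimal Python sketch might look like:

    # Illustrative sketch only: the study does not describe its tooling,
    # so every name below is an assumption, not the authors' method.

    PROMPT_TEMPLATE = (
        "You are a guideline panel member. Based on the structured evidence "
        "below, draft a recommendation stating direction and strength.\n\n"
        "Evidence:\n{evidence}"
    )

    MODELS = ["chatgpt-3.5", "claude-3-sonnet", "bard", "chatglm-4", "kimi-chat"]

    def query_llm(model: str, prompt: str) -> str:
        """Placeholder for a vendor-specific API call (hypothetical stub)."""
        return f"[{model}] recommendation text"

    def generate_recommendations(evidence: str) -> dict[str, str]:
        # Same prompt and evidence go to every model, as in the study design.
        prompt = PROMPT_TEMPLATE.format(evidence=evidence)
        return {model: query_llm(model, prompt) for model in MODELS}

    if __name__ == "__main__":
        evidence = "GRADE summary-of-findings table for intervention X vs. control"
        for model, rec in generate_recommendations(evidence).items():
            print(model, "->", rec)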
Results: ChatGPT-3.5 demonstrated the highest proficiency in comprehensively extracting and synthesizing evidence to formulate novel insights. Bard consistently adhered to existing guideline principles, aligning its outputs with these tenets. Claude generated fewer topical recommendations, focusing instead on evidence analysis and filtering out irrelevant information. ChatGLM-4 exhibited a balanced approach, combining evidence extraction with adherence to guideline principles. Kimi showed potential in generating concise and targeted recommendations. Across the six generated recommendations, average consistency ranged from 50% to 91.7%.
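The abstract does not define how consistency was scored. Assuming it is the share of model outputs that match a reference recommendation on direction and strength, a toy calculation could look like this (all data below are made up):

    # Hypothetical illustration: the paper does not define its consistency
    # metric; here consistency = share of models matching the reference
    # recommendation on both direction and strength.

    reference = ("for", "strong")  # direction, strength of the guideline recommendation

    outputs = {  # made-up model outputs
        "chatgpt-3.5": ("for", "strong"),
        "claude-3-sonnet": ("for", "conditional"),
        "bard": ("for", "strong"),
        "chatglm-4": ("for", "strong"),
        "kimi-chat": ("against", "conditional"),
    }

    matches = sum(out == reference for out in outputs.values())
    consistency = 100 * matches / len(outputs)
    print(f"consistency: {consistency:.1f}%")  # 60.0% for this made-up example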
Conclusion: The findings of this study suggest that LLMs hold immense potential in accelerating the formulation of evidence-based recommendations. LLMs can rapidly and comprehensively extract and synthesize relevant information from structured evidence, generating recommendations that align with the available evidence.
Journal Introduction
The Journal of Evidence-Based Medicine (JEBM) is an esteemed international healthcare and medical decision-making journal, dedicated to publishing groundbreaking research outcomes in evidence-based decision-making, research, practice, and education. Serving as the official English-language journal of the Cochrane China Centre and West China Hospital of Sichuan University, it welcomes editorials, commentaries, and systematic reviews on topics such as clinical trials, policy, drug and patient safety, education, and knowledge translation.