Leveraging AI for Meta-Analysis: Evaluating LLMs in Detecting Publication Bias for Next-Generation Evidence Synthesis
Xing Xing, Lifeng Lin, Mohammad Hassan Murad, Jiayi Tong
Cochrane Evidence Synthesis and Methods, Volume 3, Issue 5, published 2025-09-18. DOI: 10.1002/cesm.70047 (https://onlinelibrary.wiley.com/doi/10.1002/cesm.70047)
Abstract
Introduction
Publication bias (PB) threatens the validity of meta-analyses by distorting effect size estimates, potentially leading to misleading conclusions. With advanced pattern recognition and multimodal capabilities, large language models (LLMs) may be able to evaluate PB and make the systematic review process more efficient.
Methods
We evaluated the ability of two state-of-the-art multimodal LLMs, GPT-4o and Llama 3.2 Vision, to detect PB using funnel plots alone and in combination with quantitative inputs. We simulated meta-analyses under varying conditions, including the absence of PB, different severities of PB, different numbers of studies per meta-analysis, and differing degrees of between-study heterogeneity.
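As a rough illustration of this kind of setup, the sketch below simulates a single meta-analysis with one-sided suppression of non-significant studies and draws the corresponding funnel plot. It assumes a random-effects data-generating model implemented in Python with numpy and matplotlib; the true effect, heterogeneity (tau-squared), standard-error range, and suppression rule are placeholder choices, not the settings used in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

def simulate_meta_analysis(n_studies=20, true_effect=0.3, tau2=0.05,
                           suppress_prob=0.8):
    """Simulate published study effects under a random-effects model,
    suppressing non-significant studies with probability `suppress_prob`
    to mimic publication bias. All parameter values are placeholders."""
    effects, ses = [], []
    while len(effects) < n_studies:
        se = rng.uniform(0.1, 0.6)                        # within-study standard error
        theta_i = rng.normal(true_effect, np.sqrt(tau2))  # study-specific true effect
        y_i = rng.normal(theta_i, se)                      # observed effect estimate
        significant = abs(y_i / se) > 1.96
        # Non-significant studies reach "publication" only some of the time.
        if significant or rng.random() > suppress_prob:
            effects.append(y_i)
            ses.append(se)
    return np.array(effects), np.array(ses)

effects, ses = simulate_meta_analysis()

# Funnel plot: effect estimates against standard errors, with the y-axis
# inverted so the most precise studies sit at the top, as is conventional.
plt.scatter(effects, ses)
plt.gca().invert_yaxis()
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.title("Simulated funnel plot with induced publication bias")
plt.savefig("funnel_plot.png")
```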
Results
Neither GPT-4o nor Llama 3.2 Vision consistently detected the presence of PB across the various settings. Under no-publication-bias conditions, GPT-4o achieved higher specificity than Llama 3.2 Vision, with the difference most pronounced in meta-analyses of 20 or more studies. The inclusion of quantitative inputs alongside funnel plots did not significantly improve performance. Additionally, between-study heterogeneity and the patterns of unreported studies had minimal impact on the models’ assessments.
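For reference, specificity in the no-publication-bias setting is simply the proportion of unbiased simulated meta-analyses that a model correctly labels as free of PB. A minimal sketch, using made-up verdict labels rather than any results from the study:

```python
# Hypothetical illustration: specificity is computed over meta-analyses
# simulated WITHOUT publication bias, as the share correctly judged unbiased.
# The verdict labels below are invented placeholders, not study results.
verdicts = ["no_bias", "no_bias", "bias", "no_bias", "no_bias", "bias"]  # LLM outputs on no-PB data

true_negatives = sum(v == "no_bias" for v in verdicts)   # correct "no bias" calls
false_positives = sum(v == "bias" for v in verdicts)     # spurious bias detections
specificity = true_negatives / (true_negatives + false_positives)

print(f"Specificity on no-bias meta-analyses: {specificity:.2f}")  # -> 0.67
```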
Conclusions
At present, the ability of LLMs to detect PB without fine-tuning is limited. This study highlights the need for specialized model adaptation before LLMs can be effectively integrated into meta-analysis workflows. Future research can focus on targeted refinements to enhance LLM performance and utility in evidence synthesis.