Systematic Reviews in the Age of AI: Are We Sacrificing Rigor for Volume?

Howard Lopes Ribeiro Junior, Francisco Washington Araújo Barros Nepomuceno, Cláudia do Ó Pessoa, Mauer Alexandre da Ascensão Gonçalves

Cochrane Evidence Synthesis and Methods, vol. 4, no. 3. DOI: 10.1002/cesm.70080. Published 2026-03-31. Available at https://onlinelibrary.wiley.com/doi/10.1002/cesm.70080
Abstract
Introduction
Systematic reviews occupy a central position in evidence hierarchies, providing structured syntheses intended to inform clinical decision-making and health policy. However, the rapid expansion of artificial intelligence (AI) tools for literature searching, screening, data extraction, and manuscript drafting is transforming how these reviews are produced. Concurrently, the number of prospectively registered systematic reviews has grown substantially, with recent increases in PROSPERO registrations highlighting an accelerating output of evidence syntheses. While these technological advances promise efficiency and scalability, they also raise concerns about methodological rigor, redundancy, and transparency.
Methods
This viewpoint argues that the current reporting and governance frameworks for systematic reviews remain largely anchored in pre-AI workflows.
Results
Ongoing updates to reporting standards, including PRISMA revisions, have yet to fully address key challenges introduced by AI-assisted methodologies, such as algorithmic bias, auditability, the reproducibility limitations of proprietary models, and the need to document human oversight. The absence of explicit guidance for reporting AI use creates a critical transparency gap, potentially undermining confidence in systematic reviews and increasing the risk of superficial or duplicated syntheses.
Conclusion
We propose that the evidence-synthesis ecosystem requires urgent adaptation, including the development of a PRISMA-AI extension, strengthened metadata requirements in registries such as PROSPERO, and updated editorial policies for AI-assisted reviews. Safeguarding rigor in the age of automated science is essential to maintaining the credibility and clinical utility of systematic reviews.