Systematic Reviews in the Age of AI: Are We Sacrificing Rigor for Volume?

Howard Lopes Ribeiro Junior, Francisco Washington Araújo Barros Nepomuceno, Cláudia do Ó Pessoa, Mauer Alexandre da Ascensão Gonçalves
*Cochrane Evidence Synthesis and Methods*, Volume 4, Issue 3. DOI: 10.1002/cesm.70080. Published 2026-03-31.

Abstract

Introduction

Systematic reviews occupy a central position in evidence hierarchies, providing structured syntheses intended to inform clinical decision-making and health policy. However, the rapid expansion of artificial intelligence (AI) tools in literature searching, screening, data extraction, and manuscript drafting is transforming how these reviews are produced. Concurrently, the number of prospectively registered systematic reviews has grown substantially, with recent increases in PROSPERO registrations highlighting an accelerating output of evidence syntheses. While technological advances promise efficiency and scalability, they also raise concerns regarding methodological rigor, redundancy, and transparency.

Methods

This viewpoint argues that the current reporting and governance frameworks for systematic reviews remain largely anchored in pre-AI workflows.

Results

Ongoing updates to reporting standards, including PRISMA revisions, have yet to fully address key challenges introduced by AI-assisted methodologies, such as algorithmic bias, auditability, reproducibility limitations of proprietary models, and the need to document human oversight. The absence of explicit guidance for reporting AI use creates a critical transparency gap, potentially undermining confidence in systematic reviews and increasing the risk of superficial or duplicated syntheses.

Conclusion

We propose that the evidence-synthesis ecosystem requires urgent adaptation, including the development of a PRISMA-AI extension, strengthened metadata requirements in registries such as PROSPERO, and updated editorial policies for AI-assisted reviews. Safeguarding rigor in the age of automated science is essential to maintain the credibility and clinical utility of systematic reviews.
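To make the proposed "strengthened metadata requirements" concrete, the sketch below shows what a machine-readable AI-use disclosure record might look like for a registry such as PROSPERO. This is a hypothetical illustration only: the field names (`tool_name`, `review_stage`, `human_oversight`, and so on) are invented for this example and are not drawn from PROSPERO, PRISMA, or any existing standard.

```python
from dataclasses import dataclass

# Hypothetical disclosure fields a registry might require for each
# AI-assisted step of a review. Names are illustrative assumptions.
REQUIRED_FIELDS = {"tool_name", "tool_version", "review_stage", "human_oversight"}

@dataclass
class AIUseRecord:
    tool_name: str        # AI tool used (e.g. a screening assistant)
    tool_version: str     # exact version, needed for reproducibility
    review_stage: str     # "search", "screening", "extraction", or "drafting"
    human_oversight: str  # how human reviewers verified the tool's outputs
    model_access: str = "proprietary"  # "open" or "proprietary" model

def missing_fields(record: dict) -> list[str]:
    """Return the required disclosure fields absent from a submitted record."""
    return sorted(REQUIRED_FIELDS - record.keys())

# A submission that omits the human-oversight statement would be flagged:
incomplete = {
    "tool_name": "ExampleScreener",
    "tool_version": "2.1",
    "review_stage": "screening",
}
print(missing_fields(incomplete))  # -> ['human_oversight']
```

A registry enforcing even this minimal schema would close part of the transparency gap the abstract describes, since reviewers and readers could audit which stages were AI-assisted and how outputs were checked.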
