Artificial Intelligence Software to Accelerate Screening for Living Systematic Reviews

IF 5.5 · CAS Tier 1 (Psychology) · JCR Q1 (Psychology, Clinical)
Matthew Fuller-Tyszkiewicz, Allan Jones, Rajesh Vasa, Jacqui A. Macdonald, Camille Deane, Delyth Samuel, Tracy Evans-Whipp, Craig A. Olsson
{"title":"人工智能软件加速筛选生活系统评论","authors":"Matthew Fuller-Tyszkiewicz, Allan Jones, Rajesh Vasa, Jacqui A. Macdonald, Camille Deane, Delyth Samuel, Tracy Evans-Whipp, Craig A. Olsson","doi":"10.1007/s10567-025-00519-5","DOIUrl":null,"url":null,"abstract":"<p>Systematic and meta-analytic reviews provide gold-standard evidence but are static and outdate quickly. Here we provide performance data on a new software platform, LitQuest, that uses artificial intelligence technologies to (1) accelerate screening of titles and abstracts from library literature searches, and (2) provide a software solution for enabling living systematic reviews by maintaining a saved AI algorithm for updated searches. Performance testing was based on LitQuest data from seven systematic reviews. LitQuest <i>efficiency</i> was estimated as the proportion (%) of the total yield of an initial literature search (titles/abstracts) that needed human screening prior to reaching the in-built stop threshold. LitQuest algorithm <i>performance</i> was measured as work saved over sampling (WSS) for a certain recall. LitQuest <i>accuracy</i> was estimated as the proportion of incorrectly classified papers in the rejected pool, as determined by two independent human raters. On average, around 36% of the total yield of a literature search needed to be human screened prior to reaching the stop-point. However, this ranged from 22 to 53% depending on the complexity of language structure across papers included in specific reviews. Accuracy was 99% at an interrater reliability of 95%, and 0% of titles/abstracts were incorrectly assigned. Findings suggest that LitQuest can be a cost-effective and time-efficient solution to supporting living systematic reviews, particularly for rapidly developing areas of science. Further development of LitQuest is planned, including facilitated full-text data extraction and community-of-practice access to living systematic review findings.</p>","PeriodicalId":51399,"journal":{"name":"Clinical Child and Family Psychology Review","volume":"219 1","pages":""},"PeriodicalIF":5.5000,"publicationDate":"2025-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence Software to Accelerate Screening for Living Systematic Reviews\",\"authors\":\"Matthew Fuller-Tyszkiewicz, Allan Jones, Rajesh Vasa, Jacqui A. Macdonald, Camille Deane, Delyth Samuel, Tracy Evans-Whipp, Craig A. Olsson\",\"doi\":\"10.1007/s10567-025-00519-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Systematic and meta-analytic reviews provide gold-standard evidence but are static and outdate quickly. Here we provide performance data on a new software platform, LitQuest, that uses artificial intelligence technologies to (1) accelerate screening of titles and abstracts from library literature searches, and (2) provide a software solution for enabling living systematic reviews by maintaining a saved AI algorithm for updated searches. Performance testing was based on LitQuest data from seven systematic reviews. LitQuest <i>efficiency</i> was estimated as the proportion (%) of the total yield of an initial literature search (titles/abstracts) that needed human screening prior to reaching the in-built stop threshold. LitQuest algorithm <i>performance</i> was measured as work saved over sampling (WSS) for a certain recall. 
LitQuest <i>accuracy</i> was estimated as the proportion of incorrectly classified papers in the rejected pool, as determined by two independent human raters. On average, around 36% of the total yield of a literature search needed to be human screened prior to reaching the stop-point. However, this ranged from 22 to 53% depending on the complexity of language structure across papers included in specific reviews. Accuracy was 99% at an interrater reliability of 95%, and 0% of titles/abstracts were incorrectly assigned. Findings suggest that LitQuest can be a cost-effective and time-efficient solution to supporting living systematic reviews, particularly for rapidly developing areas of science. Further development of LitQuest is planned, including facilitated full-text data extraction and community-of-practice access to living systematic review findings.</p>\",\"PeriodicalId\":51399,\"journal\":{\"name\":\"Clinical Child and Family Psychology Review\",\"volume\":\"219 1\",\"pages\":\"\"},\"PeriodicalIF\":5.5000,\"publicationDate\":\"2025-04-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Child and Family Psychology Review\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1007/s10567-025-00519-5\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, CLINICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Child and Family Psychology Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s10567-025-00519-5","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, CLINICAL","Score":null,"Total":0}
DOI: 10.1007/s10567-025-00519-5 · Published: 2025-04-18 · Citations: 0

Abstract


Systematic and meta-analytic reviews provide gold-standard evidence but are static and quickly become outdated. Here we provide performance data on a new software platform, LitQuest, that uses artificial intelligence technologies to (1) accelerate screening of titles and abstracts from library literature searches, and (2) provide a software solution for enabling living systematic reviews by maintaining a saved AI algorithm for updated searches. Performance testing was based on LitQuest data from seven systematic reviews. LitQuest efficiency was estimated as the proportion (%) of the total yield of an initial literature search (titles/abstracts) that needed human screening before reaching the in-built stop threshold. LitQuest algorithm performance was measured as work saved over sampling (WSS) at a given recall level. LitQuest accuracy was estimated as the proportion of incorrectly classified papers in the rejected pool, as determined by two independent human raters. On average, around 36% of the total yield of a literature search needed to be screened by humans before reaching the stop point. However, this ranged from 22% to 53%, depending on the complexity of language structure across papers included in specific reviews. Accuracy was 99% at an interrater reliability of 95%, and 0% of titles/abstracts were incorrectly assigned. Findings suggest that LitQuest can be a cost-effective and time-efficient solution for supporting living systematic reviews, particularly for rapidly developing areas of science. Further development of LitQuest is planned, including facilitated full-text data extraction and community-of-practice access to living systematic review findings.
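The three metrics above are defined only in prose. The short Python sketch below shows one plausible way to compute them; it is not the authors' implementation, and the function names and illustrative numbers are assumptions for demonstration. WSS is computed here using the standard definition WSS@R = (TN + FN)/N − (1 − R), where N is the total number of records, TN and FN are the true and false negatives at the stop point, and R is the target recall.

```python
# A minimal sketch (assumptions, not the authors' code) of the three
# screening metrics described in the abstract.

def efficiency(n_human_screened: int, n_total: int) -> float:
    """Proportion of the total search yield screened by humans
    before the in-built stop threshold is reached."""
    return n_human_screened / n_total

def wss_at_recall(true_neg: int, false_neg: int, n_total: int, recall: float) -> float:
    """Work saved over sampling: WSS@R = (TN + FN) / N - (1 - R)."""
    return (true_neg + false_neg) / n_total - (1.0 - recall)

def rejected_pool_accuracy(n_misclassified: int, n_rejected: int) -> float:
    """Accuracy of the rejected pool: share of rejected records that
    independent human raters agree were correctly rejected."""
    return 1.0 - n_misclassified / n_rejected

# Illustrative numbers only (not taken from the study):
n_total, n_human_screened = 1000, 360   # ~36% of the yield screened by humans
true_neg, false_neg = 600, 10           # model-rejected records at the stop point

print(f"Efficiency: {efficiency(n_human_screened, n_total):.0%}")              # 36%
print(f"WSS@95%:    {wss_at_recall(true_neg, false_neg, n_total, 0.95):.2f}")  # 0.56
print(f"Accuracy:   {rejected_pool_accuracy(6, 640):.0%}")                     # 99%
```

Under this formulation, a WSS@95% of 0.56 would mean the screener saved 56 percentage points of work relative to screening records in random order while still retrieving 95% of the relevant papers.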

Source journal: Clinical Child and Family Psychology Review
CiteScore: 10.50
Self-citation rate: 4.30%
Articles published: 45
Journal description: Editors-in-Chief: Dr. Ronald J. Prinz, University of South Carolina, and Dr. Thomas H. Ollendick, Virginia Polytechnic Institute. Clinical Child and Family Psychology Review is a quarterly, peer-reviewed journal that provides an international, interdisciplinary forum in which important and new developments in this field are identified and in-depth reviews on current thought and practices are published. The Journal publishes original research reviews, conceptual and theoretical papers, and related work in the broad area of the behavioral sciences that pertains to infants, children, adolescents, and families. Contributions originate from a wide array of disciplines including, but not limited to, psychology (e.g., clinical, community, developmental, family, school), medicine (e.g., family practice, pediatrics, psychiatry), public health, social work, and education. Topical content includes science and application and covers facets of etiology, assessment, description, treatment and intervention, prevention, methodology, and public policy. Submissions are by invitation only and undergo peer review. The Editors, in consultation with the Editorial Board, invite highly qualified experts to contribute original papers on topics of timely interest and significance.