Matthew Fuller-Tyszkiewicz, Allan Jones, Rajesh Vasa, Jacqui A. Macdonald, Camille Deane, Delyth Samuel, Tracy Evans-Whipp, Craig A. Olsson
Journal: Clinical Child and Family Psychology Review (Q1, Psychology, Clinical; Impact Factor 5.5)
DOI: 10.1007/s10567-025-00519-5
Published: 2025-04-18 (Journal Article)
Artificial Intelligence Software to Accelerate Screening for Living Systematic Reviews
Systematic and meta-analytic reviews provide gold-standard evidence but are static and quickly become outdated. Here we provide performance data on a new software platform, LitQuest, that uses artificial intelligence technologies to (1) accelerate screening of titles and abstracts from library literature searches, and (2) provide a software solution for enabling living systematic reviews by maintaining a saved AI algorithm for updated searches. Performance testing was based on LitQuest data from seven systematic reviews. LitQuest efficiency was estimated as the proportion (%) of the total yield of an initial literature search (titles/abstracts) that needed human screening before reaching the in-built stop threshold. LitQuest algorithm performance was measured as work saved over sampling (WSS) at a given recall. LitQuest accuracy was estimated as the proportion of incorrectly classified papers in the rejected pool, as determined by two independent human raters. On average, around 36% of the total yield of a literature search needed to be human screened before reaching the stop-point; this ranged from 22% to 53% depending on the complexity of language structure across papers included in specific reviews. Accuracy was 99% at an interrater reliability of 95%, and 0% of titles/abstracts were incorrectly assigned. Findings suggest that LitQuest can be a cost-effective and time-efficient solution for supporting living systematic reviews, particularly in rapidly developing areas of science. Further development of LitQuest is planned, including facilitated full-text data extraction and community-of-practice access to living systematic review findings.
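The WSS metric mentioned in the abstract is conventionally defined (following Cohen et al., 2006) as the fraction of records reviewers can skip, relative to random screening, at a fixed recall level: WSS@R = (TN + FN)/N − (1 − R). The paper does not publish its exact computation, so the function and counts below are an illustrative sketch, not LitQuest's implementation:

```python
def wss_at_recall(tn: int, fn: int, n: int, recall: float) -> float:
    """Work saved over sampling at a given recall level.

    WSS@R = (TN + FN) / N - (1 - R), where TN + FN is the number of
    records the model excluded (and reviewers did not have to screen),
    N is the total search yield, and R is the target recall.
    """
    return (tn + fn) / n - (1.0 - recall)

# Hypothetical example: a search yields 10,000 records and the model
# excludes 6,400 of them (6,350 true negatives, 50 false negatives),
# evaluated at 95% recall:
saving = wss_at_recall(tn=6350, fn=50, n=10000, recall=0.95)
print(f"WSS@95%: {saving:.2f}")  # 0.64 - 0.05 = 0.59
```

In words: relative to reading records in random order until 95% of the relevant papers are found, the model saves reviewers 59% of the screening work in this illustrative scenario.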
About the journal:
Editors-in-Chief: Dr. Ronald J. Prinz, University of South Carolina and Dr. Thomas H. Ollendick, Virginia Polytechnic Institute Clinical Child and Family Psychology Review is a quarterly, peer-reviewed journal that provides an international, interdisciplinary forum in which important and new developments in this field are identified and in-depth reviews on current thought and practices are published. The Journal publishes original research reviews, conceptual and theoretical papers, and related work in the broad area of the behavioral sciences that pertains to infants, children, adolescents, and families. Contributions originate from a wide array of disciplines including, but not limited to, psychology (e.g., clinical, community, developmental, family, school), medicine (e.g., family practice, pediatrics, psychiatry), public health, social work, and education. Topical content includes science and application and covers facets of etiology, assessment, description, treatment and intervention, prevention, methodology, and public policy. Submissions are by invitation only and undergo peer review. The Editors, in consultation with the Editorial Board, invite highly qualified experts to contribute original papers on topics of timely interest and significance.