Title: Automate the ‘boring bits’: An assessment of AI-assisted systematic review (AIASR)
Authors: Timothy Hampson, Kelly Cargos, Jim McKinley
DOI: 10.1016/j.rmal.2025.100258
Journal: Research Methods in Applied Linguistics, Vol. 4, No. 3, Article 100258
Publication date: 2025-09-01
URL: https://www.sciencedirect.com/science/article/pii/S2772766125000795
Citations: 0
Abstract
Systematic review is a powerful tool for disseminating the findings of research, particularly in applied linguistics, where we hope to provide insights for practising language teachers. Yet systematic review is also often prohibitively time-consuming, particularly for small, underfunded teams or solo researchers. In this study, we explore the use of generative artificial intelligence to ease the burden of screening and organising papers. Our findings suggest that AI excels in some tasks, particularly those involving explicitly stated information, and struggles in others, particularly when information is more implicit. A comparison of generative artificial intelligence for filtering papers with ASReview, a popular non-generative tool, reveals trade-offs: generative AI is replicable and more efficient, but raises concerns about accuracy. We conclude that generative artificial intelligence can be a useful tool for systematic review but requires rigorous validation before use, and we close by emphasising the importance of testing AI for systematic review tasks and exploring how this can practically be achieved.