Unraveling the mechanisms and effectiveness of AI-assisted feedback in education: A systematic literature review
Shen Ba, Lan Yang, Zi Yan, Chee Kit Looi, Dragan Gašević
Computers and Education Open, Volume 9, Article 100284 (published 2025-08-24). DOI: 10.1016/j.caeo.2025.100284
Abstract
Rapid advancements in Artificial Intelligence (AI) have prompted growing interest in leveraging AI for educational feedback. Yet the centrality of the learner in this process is often overshadowed by technological excitement, and a broad understanding of AI-assisted feedback (AIFB) in education is still evolving. To address this gap, we conducted a systematic review of 129 peer-reviewed journal articles (2014–2023), identified through widely used AI-related search terms, to examine how AI, especially generative AI, supports feedback mechanisms and influences learner perceptions, actions, and outcomes. Our analysis identified a sharp rise in AIFB research after 2018, driven by modern large language models. We found that AI tools flexibly cater to multiple feedback foci (task, process, self-regulation, and self) and complexity levels (basic, intermediate, and elaborated). Our findings demonstrate that AIFB can effectively enhance targeted learning outcomes. By employing a transparent and field-aligned methodology, we synthesized recent advances and offer actionable insights for both research and practice. While the focus on widely recognized AI-related search terms ensures strong comparability and relevance, some specialized subfields (e.g., Automated Writing Evaluation) are less prominent in this synthesis. The study also highlights the ongoing need for clearer reporting of underlying AI algorithms. Building on these findings, we propose an original conceptual model that synthesizes current progress and offers a roadmap for future explorations. By illuminating the affordances and constraints of AIFB, we highlight the necessity of transparent methodological reporting and underscore the importance of integrating pedagogical and technological insights to promote meaningful, learner-centered feedback.