{"title":"Analyzing Intervention Strategies Employed in Response to Automated Academic-Risk Identification: A Systematic Review","authors":"Augusto Schmidt;Cristian Cechinel;Emanuel Marques Queiroga;Tiago Primo;Vinicius Ramos;Andréa Sabedra Bordin;Rafael Ferreira Mello;Roberto Muñoz","doi":"10.1109/RITA.2025.3540161","DOIUrl":null,"url":null,"abstract":"Predicting in advance the likelihood of students failing a course or withdrawing from a degree program has emerged as one of the widely embraced applications of Learning Analytics. While the literature extensively addresses the identification of at-risk students, it often doesn’t evolve into actual interventions, focusing more on reporting experimental outcomes than on translating them into real-world impact. The goal of early identification is straightforward, empowering educators to intervene before actual failure or dropout, but not enough attention is paid to what happens after the students are flagged as at risk. Interventions like personalized feedback, automated alerts, and targeted support can be game-changers, reducing failure and dropout rates. However, as this paper shows, few studies actually dig into the effectiveness of these strategies or measure their impact on student outcomes. Even more striking is the lack of research targeting stakeholders beyond students, like educators, administrators, and curriculum designers, who play a key role in driving meaningful interventions. The paper explores recent literature on automated academic risk prediction, focusing on interventions in selected papers. Our findings highlight that only about 14% of studies propose actionable interventions, and even fewer implement them. Despite these challenges, we can see that a global momentum is building around Learning Analytics, and institutions are starting to tap into the potential of these tools. However, academic databases, loaded with valuable insights, remain massively underused. To move the field forward, we propose actionable strategies, like developing intervention frameworks that engage multiple stakeholders, creating standardized metrics for measuring success and expanding data sources to include both traditional academic systems and alternative datasets. By tackling these issues, this paper doesn’t just highlight what is missing; it offers a roadmap for researchers and practitioners alike, aiming to close the gap between prediction and action. It’s time to go beyond identifying risks and start making a real difference where it matters most.","PeriodicalId":38963,"journal":{"name":"Revista Iberoamericana de Tecnologias del Aprendizaje","volume":"20 ","pages":"77-85"},"PeriodicalIF":1.0000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Revista Iberoamericana de Tecnologias del Aprendizaje","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10879057/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Predicting in advance the likelihood that students will fail a course or withdraw from a degree program has emerged as one of the most widely embraced applications of Learning Analytics. While the literature extensively addresses the identification of at-risk students, it rarely progresses to actual interventions, focusing more on reporting experimental outcomes than on translating them into real-world impact. The goal of early identification is straightforward: empowering educators to intervene before failure or dropout occurs. Yet not enough attention is paid to what happens after students are flagged as at risk. Interventions such as personalized feedback, automated alerts, and targeted support can be game-changers, reducing failure and dropout rates. However, as this paper shows, few studies actually examine the effectiveness of these strategies or measure their impact on student outcomes. Even more striking is the lack of research targeting stakeholders beyond students, such as educators, administrators, and curriculum designers, who play a key role in driving meaningful interventions. This paper surveys recent literature on automated academic-risk prediction, focusing on the interventions reported in the selected papers. Our findings show that only about 14% of studies propose actionable interventions, and even fewer implement them. Despite these challenges, global momentum is building around Learning Analytics, and institutions are starting to tap into the potential of these tools. However, academic databases, loaded with valuable insights, remain massively underused. To move the field forward, we propose actionable strategies: developing intervention frameworks that engage multiple stakeholders, creating standardized metrics for measuring success, and expanding data sources to include both traditional academic systems and alternative datasets. By tackling these issues, this paper does not merely highlight what is missing; it offers a roadmap for researchers and practitioners alike, aiming to close the gap between prediction and action. It is time to move beyond identifying risks and start making a real difference where it matters most.
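To make the prediction-to-intervention gap concrete, below is a minimal, purely illustrative sketch of the kind of academic-risk classifier the surveyed studies typically build. The features (logins, submission rate, quiz average), the synthetic data, and the 0.5 risk threshold are all assumptions for illustration, not drawn from this review or any paper it covers; the point is that pipelines like this usually stop at the flag, which is exactly the gap the review identifies.

```python
# Hypothetical sketch of an at-risk prediction pipeline (not the
# paper's method). Features, data, and threshold are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 500

# Synthetic LMS-style engagement features (assumed, not real data):
# weekly logins, assignment submission rate, and average quiz score.
logins = rng.poisson(5, n)
submission_rate = rng.uniform(0, 1, n)
quiz_avg = rng.uniform(0, 100, n)
X = np.column_stack([logins, submission_rate, quiz_avg])

# Synthetic label: higher engagement lowers the odds of failure.
logit = 1.5 - 0.2 * logins - 2.0 * submission_rate - 0.02 * quiz_avg
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Most surveyed studies end here, at the flag; few trigger or
# evaluate an intervention for the students identified as at risk.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.where(risk > 0.5)[0]
print(f"{len(flagged)} of {len(X_test)} students flagged as at risk")
```

In a deployed system, the final step would hand the flagged list to an intervention workflow (personalized feedback, automated alerts, targeted support) whose effect on failure and dropout rates is itself measured; the review's central finding is that this step is usually missing.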