Analyzing Intervention Strategies Employed in Response to Automated Academic-Risk Identification: A Systematic Review

Impact Factor: 1.0 | JCR Quartile: Q4 | JCR Category: Computer Science, Interdisciplinary Applications
Augusto Schmidt;Cristian Cechinel;Emanuel Marques Queiroga;Tiago Primo;Vinicius Ramos;Andréa Sabedra Bordin;Rafael Ferreira Mello;Roberto Muñoz
{"title":"Analyzing Intervention Strategies Employed in Response to Automated Academic-Risk Identification: A Systematic Review","authors":"Augusto Schmidt;Cristian Cechinel;Emanuel Marques Queiroga;Tiago Primo;Vinicius Ramos;Andréa Sabedra Bordin;Rafael Ferreira Mello;Roberto Muñoz","doi":"10.1109/RITA.2025.3540161","DOIUrl":null,"url":null,"abstract":"Predicting in advance the likelihood of students failing a course or withdrawing from a degree program has emerged as one of the widely embraced applications of Learning Analytics. While the literature extensively addresses the identification of at-risk students, it often doesn’t evolve into actual interventions, focusing more on reporting experimental outcomes than on translating them into real-world impact. The goal of early identification is straightforward, empowering educators to intervene before actual failure or dropout, but not enough attention is paid to what happens after the students are flagged as at risk. Interventions like personalized feedback, automated alerts, and targeted support can be game-changers, reducing failure and dropout rates. However, as this paper shows, few studies actually dig into the effectiveness of these strategies or measure their impact on student outcomes. Even more striking is the lack of research targeting stakeholders beyond students, like educators, administrators, and curriculum designers, who play a key role in driving meaningful interventions. The paper explores recent literature on automated academic risk prediction, focusing on interventions in selected papers. Our findings highlight that only about 14% of studies propose actionable interventions, and even fewer implement them. Despite these challenges, we can see that a global momentum is building around Learning Analytics, and institutions are starting to tap into the potential of these tools. However, academic databases, loaded with valuable insights, remain massively underused. To move the field forward, we propose actionable strategies, like developing intervention frameworks that engage multiple stakeholders, creating standardized metrics for measuring success and expanding data sources to include both traditional academic systems and alternative datasets. By tackling these issues, this paper doesn’t just highlight what is missing; it offers a roadmap for researchers and practitioners alike, aiming to close the gap between prediction and action. It’s time to go beyond identifying risks and start making a real difference where it matters most.","PeriodicalId":38963,"journal":{"name":"Revista Iberoamericana de Tecnologias del Aprendizaje","volume":"20 ","pages":"77-85"},"PeriodicalIF":1.0000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Revista Iberoamericana de Tecnologias del Aprendizaje","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10879057/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Predicting in advance the likelihood of students failing a course or withdrawing from a degree program has emerged as one of the most widely embraced applications of Learning Analytics. While the literature extensively addresses the identification of at-risk students, it often does not progress to actual interventions, focusing more on reporting experimental outcomes than on translating them into real-world impact. The goal of early identification is straightforward: empowering educators to intervene before failure or dropout actually occurs. Yet not enough attention is paid to what happens after students are flagged as at risk. Interventions such as personalized feedback, automated alerts, and targeted support can be game-changers, reducing failure and dropout rates. However, as this paper shows, few studies actually examine the effectiveness of these strategies or measure their impact on student outcomes. Even more striking is the lack of research targeting stakeholders beyond students, such as educators, administrators, and curriculum designers, who play a key role in driving meaningful interventions. This paper explores recent literature on automated academic-risk prediction, focusing on the interventions reported in the selected papers. Our findings highlight that only about 14% of studies propose actionable interventions, and even fewer implement them. Despite these challenges, global momentum is building around Learning Analytics, and institutions are starting to tap into the potential of these tools. However, academic databases, loaded with valuable insights, remain massively underused. To move the field forward, we propose actionable strategies: developing intervention frameworks that engage multiple stakeholders, creating standardized metrics for measuring success, and expanding data sources to include both traditional academic systems and alternative datasets. By tackling these issues, this paper does not just highlight what is missing; it offers a roadmap for researchers and practitioners alike, aiming to close the gap between prediction and action. It is time to go beyond identifying risks and start making a real difference where it matters most.
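The abstract describes a prediction-to-intervention loop: a model trained on institutional data flags students at risk, and that flag is supposed to trigger an intervention such as an automated alert or personalized feedback. As a purely illustrative sketch of that loop (not code from the reviewed studies), the snippet below trains a scikit-learn classifier on hypothetical academic records and routes flagged students to a placeholder alert step; all column names, the 0.6 risk threshold, and the data layout are assumptions.

```python
# Illustrative sketch only (not from the reviewed studies): a minimal
# prediction-to-intervention loop. Column names, the risk threshold,
# and the alerting step are hypothetical assumptions.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical predictors drawn from an academic database / LMS logs.
FEATURES = ["gpa", "attendance_rate", "lms_logins_per_week", "assignments_missed"]


def train_risk_model(history: pd.DataFrame) -> RandomForestClassifier:
    """Fit a classifier on past cohorts labelled failed_or_dropped in {0, 1}."""
    X_train, X_test, y_train, y_test = train_test_split(
        history[FEATURES], history["failed_or_dropped"],
        test_size=0.2, random_state=42,
    )
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model


def flag_and_intervene(model: RandomForestClassifier,
                       current: pd.DataFrame,
                       threshold: float = 0.6) -> pd.DataFrame:
    """Flag current students whose predicted risk exceeds the threshold."""
    current = current.copy()
    current["risk"] = model.predict_proba(current[FEATURES])[:, 1]
    flagged = current[current["risk"] >= threshold]
    for _, student in flagged.iterrows():
        # This is the step the review finds largely missing in the literature:
        # an actual intervention (alert, personalized feedback, tutor referral)
        # whose effect on the student's outcome is then measured.
        print(f"Student {student['student_id']}: risk {student['risk']:.2f} -> send alert")
    return flagged
```

As the review argues, most published work effectively stops at the prediction step; the loop over flagged students, and any measurement of whether the intervention changed outcomes, is the part that is rarely reported.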
Source Journal Metrics
CiteScore: 4.30
Self-citation rate: 0.00%
Articles published: 45