Moral Machines or Tyranny of the Majority? A Systematic Review on Predictive Bias in Education

Lin Li, Lele Sha, Yuheng Li, Mladen Raković, Jia Rong, Srećko Joksimović, N. Selwyn, D. Gašević, Guanliang Chen
{"title":"Moral Machines or Tyranny of the Majority? A Systematic Review on Predictive Bias in Education","authors":"Lin Li, Lele Sha, Yuheng Li, Mladen Raković, Jia Rong, Srécko Joksimovíc, N. Selwyn, D. Gašević, Guanliang Chen","doi":"10.1145/3576050.3576119","DOIUrl":null,"url":null,"abstract":"Machine Learning (ML) techniques have been increasingly adopted to support various activities in education, including being applied in important contexts such as college admission and scholarship allocation. In addition to being accurate, the application of these techniques has to be fair, i.e., displaying no discrimination towards any group of stakeholders in education (mainly students and instructors) based on their protective attributes (e.g., gender and age). The past few years have witnessed an explosion of attention given to the predictive bias of ML techniques in education. Though certain endeavors have been made to detect and alleviate predictive bias in learning analytics, it is still hard for newcomers to penetrate. To address this, we systematically reviewed existing studies on predictive bias in education, and a total of 49 peer-reviewed empirical papers published after 2010 were included in this study. In particular, these papers were reviewed and summarized from the following three perspectives: (i) protective attributes, (ii) fairness measures and their applications in various educational tasks, and (iii) strategies for enhancing predictive fairness. These findings were summarized into recommendations to guide future endeavors in this strand of research, e.g., collecting and sharing more quality data containing protective attributes, developing fairness-enhancing approaches which do not require the explicit use of protective attributes, validating the effectiveness of fairness-enhancing on students and instructors in real-world settings.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"LAK23: 13th International Learning Analytics and Knowledge Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3576050.3576119","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Machine Learning (ML) techniques have been increasingly adopted to support various activities in education, including important contexts such as college admission and scholarship allocation. In addition to being accurate, the application of these techniques has to be fair, i.e., it must display no discrimination towards any group of stakeholders in education (mainly students and instructors) based on their protected attributes (e.g., gender and age). The past few years have witnessed an explosion of attention to the predictive bias of ML techniques in education. Although efforts have been made to detect and alleviate predictive bias in learning analytics, this body of work remains difficult for newcomers to navigate. To address this, we systematically reviewed existing studies on predictive bias in education; a total of 49 peer-reviewed empirical papers published after 2010 were included in this study. These papers were reviewed and summarized from three perspectives: (i) protected attributes, (ii) fairness measures and their applications in various educational tasks, and (iii) strategies for enhancing predictive fairness. The findings were distilled into recommendations to guide future work in this strand of research, e.g., collecting and sharing more quality data containing protected attributes, developing fairness-enhancing approaches that do not require the explicit use of protected attributes, and validating the effectiveness of fairness-enhancing approaches on students and instructors in real-world settings.
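For readers new to this literature, the sketch below illustrates two group-fairness measures that recur throughout the studies the review covers: demographic parity difference and equalized odds difference, computed over groups defined by a protected attribute. This is a minimal, self-contained Python/NumPy example; the function names, the hypothetical dropout-prediction data, and the attribute encoding are all illustrative assumptions, not material from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap across groups in the positive-prediction rate P(Y_hat=1 | A=g)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap across groups in either TPR or FPR (0 = perfectly equalized odds)."""
    def rate(cond):
        # Mean prediction over a boolean mask; 0 if the subgroup is empty.
        return y_pred[cond].mean() if cond.any() else 0.0
    groups = np.unique(group)
    tprs = [rate((group == g) & (y_true == 1)) for g in groups]
    fprs = [rate((group == g) & (y_true == 0)) for g in groups]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Hypothetical example: a dropout predictor evaluated by gender (illustrative data).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # observed dropout
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])   # model's binary predictions
gender = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

print(demographic_parity_difference(y_pred, gender))          # 0.25
print(equalized_odds_difference(y_true, y_pred, gender))      # ~0.67
```

A demographic parity difference of 0 means every group is flagged at the same rate, whereas equalized odds additionally conditions on the true outcome; the two measures can therefore disagree on the same model, which is one reason the reviewed papers typically report several fairness measures side by side.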