When and how biases seep in: Enhancing debiasing approaches for fair educational predictive analytics

IF 8.1 · CAS Tier 1 (Education) · Q1 EDUCATION & EDUCATIONAL RESEARCH
Lin Li, Namrata Srivastava, Jia Rong, Quanlong Guan, Dragan Gašević, Guanliang Chen
{"title":"偏见何时以及如何渗入:加强公平教育预测分析的消除偏见方法","authors":"Lin Li,&nbsp;Namrata Srivastava,&nbsp;Jia Rong,&nbsp;Quanlong Guan,&nbsp;Dragan Gašević,&nbsp;Guanliang Chen","doi":"10.1111/bjet.13575","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <p>The use of predictive analytics powered by machine learning (ML) to model educational data has increasingly been identified to exhibit bias towards marginalized populations, prompting the need for more equitable applications of these techniques. To tackle bias that emerges in training data or models at different stages of the ML modelling pipeline, numerous debiasing approaches have been proposed. Yet, research into state-of-the-art techniques for effectively employing these approaches to enhance fairness in educational predictive scenarios remains limited. Prior studies often focused on mitigating bias from a single source at a specific stage of model construction within narrowly defined scenarios, overlooking the complexities of bias originating from multiple sources across various stages. Moreover, these approaches were often evaluated using typical threshold-dependent fairness metrics, which fail to account for real-world educational scenarios where thresholds are typically unknown before evaluation. To bridge these gaps, this study systematically examined a total of 28 representative debiasing approaches, categorized by the sources of bias and the stage they targeted, for two critical educational predictive tasks, namely forum post classification and student career prediction. Both tasks involve a two-phase modelling process where features learned from upstream models in the first phase are fed into classical ML models for final predictions, which is a common yet under-explored setting for educational data modelling. The study observed that addressing local stereotypical bias, label bias or proxy discrimination in training data, as well as imposing fairness constraints on models, can effectively enhance predictive fairness. But their efficacy was often compromised when features from upstream models were inherently biased. Beyond that, this study proposes two novel strategies, namely Multi-Stage and Multi-Source debiasing to integrate existing approaches. 
These strategies demonstrated substantial improvements in mitigating unfairness, underscoring the importance of unified approaches capable of addressing biases from various sources across multiple stages.</p>\n </section>\n \n <section>\n \n <div>\n \n <div>\n \n <h3>Practitioner notes</h3>\n <p>What is already known about this topic\n\n </p><ul>\n \n <li>Predictive analytics for educational data modelling often exhibit bias against students from certain demographic groups based on sensitive attributes.</li>\n \n <li>Bias can emerge in training data or models at different time points of the ML modelling pipeline, resulting in unfair final predictions.</li>\n \n <li>Numerous debiasing approaches have been developed to tackle bias at different stages, including pre-processing training data, in-processing models, and post-processing predicted outcomes or trained models.</li>\n </ul>\n <p>What this paper adds\n\n </p><ul>\n \n <li>A systematic evaluation of 28 state-of-the-art debiasing approaches covering multiple sources of biases and multiple stages across two different educational predictive scenarios, identifying leading sources of data biases contributing to predictive unfairness.</li>\n \n <li>Further enhancing predictive fairness with proposed debiasing strategies considering the multi-source and multi-stage characteristics of biases.</li>\n \n <li>Revealing potential risks of debiasing focused on a single sensitive attribute.</li>\n </ul>\n <p>Implications for practitioners\n\n </p><ul>\n \n <li>Pre-processing approaches, particularly those addressing stereotypical bias, label bias and proxy discrimination, are generally effective for improving fairness in educational predictions. Re-weighing methods are especially useful for smaller datasets to tackle stereotypical bias.</li>\n \n <li>When dealing with two-phase modelling, biases inherently encoded in the features generated from upstream models might not be effectively addressed by debiasing approaches applied to downstream models.</li>\n \n <li>Combining debiasing approaches to tackle multiple sources of biases across multiple stages significantly enhances predictive fairness.</li>\n </ul>\n </div>\n </div>\n </section>\n </div>","PeriodicalId":48315,"journal":{"name":"British Journal of Educational Technology","volume":"56 6","pages":"2478-2501"},"PeriodicalIF":8.1000,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://bera-journals.onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13575","citationCount":"0","resultStr":"{\"title\":\"When and how biases seep in: Enhancing debiasing approaches for fair educational predictive analytics\",\"authors\":\"Lin Li,&nbsp;Namrata Srivastava,&nbsp;Jia Rong,&nbsp;Quanlong Guan,&nbsp;Dragan Gašević,&nbsp;Guanliang Chen\",\"doi\":\"10.1111/bjet.13575\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <p>The use of predictive analytics powered by machine learning (ML) to model educational data has increasingly been identified to exhibit bias towards marginalized populations, prompting the need for more equitable applications of these techniques. To tackle bias that emerges in training data or models at different stages of the ML modelling pipeline, numerous debiasing approaches have been proposed. Yet, research into state-of-the-art techniques for effectively employing these approaches to enhance fairness in educational predictive scenarios remains limited. 
Prior studies often focused on mitigating bias from a single source at a specific stage of model construction within narrowly defined scenarios, overlooking the complexities of bias originating from multiple sources across various stages. Moreover, these approaches were often evaluated using typical threshold-dependent fairness metrics, which fail to account for real-world educational scenarios where thresholds are typically unknown before evaluation. To bridge these gaps, this study systematically examined a total of 28 representative debiasing approaches, categorized by the sources of bias and the stage they targeted, for two critical educational predictive tasks, namely forum post classification and student career prediction. Both tasks involve a two-phase modelling process where features learned from upstream models in the first phase are fed into classical ML models for final predictions, which is a common yet under-explored setting for educational data modelling. The study observed that addressing local stereotypical bias, label bias or proxy discrimination in training data, as well as imposing fairness constraints on models, can effectively enhance predictive fairness. But their efficacy was often compromised when features from upstream models were inherently biased. Beyond that, this study proposes two novel strategies, namely Multi-Stage and Multi-Source debiasing to integrate existing approaches. These strategies demonstrated substantial improvements in mitigating unfairness, underscoring the importance of unified approaches capable of addressing biases from various sources across multiple stages.</p>\\n </section>\\n \\n <section>\\n \\n <div>\\n \\n <div>\\n \\n <h3>Practitioner notes</h3>\\n <p>What is already known about this topic\\n\\n </p><ul>\\n \\n <li>Predictive analytics for educational data modelling often exhibit bias against students from certain demographic groups based on sensitive attributes.</li>\\n \\n <li>Bias can emerge in training data or models at different time points of the ML modelling pipeline, resulting in unfair final predictions.</li>\\n \\n <li>Numerous debiasing approaches have been developed to tackle bias at different stages, including pre-processing training data, in-processing models, and post-processing predicted outcomes or trained models.</li>\\n </ul>\\n <p>What this paper adds\\n\\n </p><ul>\\n \\n <li>A systematic evaluation of 28 state-of-the-art debiasing approaches covering multiple sources of biases and multiple stages across two different educational predictive scenarios, identifying leading sources of data biases contributing to predictive unfairness.</li>\\n \\n <li>Further enhancing predictive fairness with proposed debiasing strategies considering the multi-source and multi-stage characteristics of biases.</li>\\n \\n <li>Revealing potential risks of debiasing focused on a single sensitive attribute.</li>\\n </ul>\\n <p>Implications for practitioners\\n\\n </p><ul>\\n \\n <li>Pre-processing approaches, particularly those addressing stereotypical bias, label bias and proxy discrimination, are generally effective for improving fairness in educational predictions. 
Re-weighing methods are especially useful for smaller datasets to tackle stereotypical bias.</li>\\n \\n <li>When dealing with two-phase modelling, biases inherently encoded in the features generated from upstream models might not be effectively addressed by debiasing approaches applied to downstream models.</li>\\n \\n <li>Combining debiasing approaches to tackle multiple sources of biases across multiple stages significantly enhances predictive fairness.</li>\\n </ul>\\n </div>\\n </div>\\n </section>\\n </div>\",\"PeriodicalId\":48315,\"journal\":{\"name\":\"British Journal of Educational Technology\",\"volume\":\"56 6\",\"pages\":\"2478-2501\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2025-03-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://bera-journals.onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13575\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Educational Technology\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13575\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Educational Technology","FirstCategoryId":"95","ListUrlMain":"https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13575","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Predictive analytics powered by machine learning (ML) for modelling educational data have increasingly been shown to exhibit bias towards marginalized populations, prompting the need for more equitable applications of these techniques. To tackle bias that emerges in training data or models at different stages of the ML modelling pipeline, numerous debiasing approaches have been proposed. Yet, research into state-of-the-art techniques for effectively employing these approaches to enhance fairness in educational predictive scenarios remains limited. Prior studies often focused on mitigating bias from a single source at a specific stage of model construction within narrowly defined scenarios, overlooking the complexities of bias originating from multiple sources across various stages. Moreover, these approaches were often evaluated using typical threshold-dependent fairness metrics, which fail to account for real-world educational scenarios where thresholds are typically unknown before evaluation. To bridge these gaps, this study systematically examined 28 representative debiasing approaches, categorized by the sources of bias and the stages they targeted, for two critical educational predictive tasks: forum post classification and student career prediction. Both tasks involve a two-phase modelling process in which features learned by upstream models in the first phase are fed into classical ML models for final predictions, a common yet under-explored setting for educational data modelling. The study observed that addressing local stereotypical bias, label bias or proxy discrimination in training data, as well as imposing fairness constraints on models, can effectively enhance predictive fairness. However, their efficacy was often compromised when features from upstream models were inherently biased. Beyond that, this study proposes two novel strategies, namely Multi-Stage and Multi-Source debiasing, to integrate existing approaches. These strategies demonstrated substantial improvements in mitigating unfairness, underscoring the importance of unified approaches capable of addressing biases from various sources across multiple stages.
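To make the two-phase setting concrete, here is a minimal, hypothetical Python sketch: phase one is simulated by pre-computed upstream features (standing in for, e.g., text embeddings of forum posts), phase two trains a classical classifier on them, and fairness is then assessed with a threshold-free quantity (the per-group ROC-AUC gap) rather than a threshold-dependent metric. All data, group labels and the injected bias are synthetic; this is not the paper's actual pipeline.

```python
# Hypothetical two-phase pipeline: simulated upstream features -> classical
# downstream classifier -> threshold-free fairness check. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 2000, 32

# Phase 1 (simulated): upstream features, e.g. embeddings from a language model.
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)  # sensitive attribute (0/1), synthetic
# Inject group-dependent signal so the features are "inherently biased".
logits = X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n)
y = (logits > logits.mean()).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# Phase 2: a classical downstream model on the upstream features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Threshold-free fairness check: gap in ROC-AUC between the two groups.
auc_gap = abs(roc_auc_score(y_te[g_te == 0], scores[g_te == 0]) -
              roc_auc_score(y_te[g_te == 1], scores[g_te == 1]))
print(f"ROC-AUC gap between groups: {auc_gap:.3f}")
```

A threshold-dependent metric (e.g., demographic parity at a fixed cut-off) would instead require committing to a decision threshold in advance, which, as the abstract notes, is typically unknown in real educational deployments.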

Practitioner notes

What is already known about this topic

  • Predictive analytics for educational data modelling often exhibit bias against students from certain demographic groups based on sensitive attributes.
  • Bias can emerge in training data or models at different time points of the ML modelling pipeline, resulting in unfair final predictions.
  • Numerous debiasing approaches have been developed to tackle bias at different stages, including pre-processing training data, in-processing models, and post-processing predicted outcomes or trained models (an in-processing example is sketched after this list).
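
As a concrete illustration of the in-processing stage mentioned above, the following hypothetical sketch trains a logistic regression with an added demographic-parity penalty: the squared gap in mean predicted score between groups. It is a generic example of imposing a fairness constraint during training, not one of the 28 approaches evaluated in the paper; all names and parameters are illustrative.

```python
# Illustrative in-processing debiasing: logistic regression trained on
# cross-entropy plus lam * (mean-score gap between groups)^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Gradient descent on CE loss + lam * (mean_0(p) - mean_1(p))^2.
    Assumes both groups (0 and 1) are present in `group`."""
    w = np.zeros(X.shape[1])
    m0, m1 = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_ce = X.T @ (p - y) / len(y)
        # Gradient of the squared demographic-parity gap w.r.t. w.
        gap = p[m0].mean() - p[m1].mean()
        dp = p * (1 - p)  # derivative of sigmoid
        grad_gap = (X[m0] * dp[m0, None]).mean(axis=0) - \
                   (X[m1] * dp[m1, None]).mean(axis=0)
        w -= lr * (grad_ce + lam * 2 * gap * grad_gap)
    return w

# Example usage (synthetic):
#   X = np.random.randn(500, 5); group = np.random.randint(0, 2, 500)
#   y = (X[:, 0] + group > 0).astype(float)
#   w = fair_logreg(X, y, group, lam=2.0)
```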

What this paper adds

  • A systematic evaluation of 28 state-of-the-art debiasing approaches covering multiple sources of biases and multiple stages across two different educational predictive scenarios, identifying leading sources of data biases contributing to predictive unfairness.
  • Further enhancing predictive fairness with proposed debiasing strategies considering the multi-source and multi-stage characteristics of biases.
  • Revealing potential risks of debiasing focused on a single sensitive attribute.

Implications for practitioners

  • Pre-processing approaches, particularly those addressing stereotypical bias, label bias and proxy discrimination, are generally effective for improving fairness in educational predictions. Re-weighing methods are especially useful for smaller datasets to tackle stereotypical bias (a minimal reweighing sketch follows this list).
  • When dealing with two-phase modelling, biases inherently encoded in the features generated from upstream models might not be effectively addressed by debiasing approaches applied to downstream models.
  • Combining debiasing approaches to tackle multiple sources of biases across multiple stages significantly enhances predictive fairness.
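
The re-weighing idea referenced in the first bullet can be sketched as follows: each training instance receives the weight P(group)·P(label) / P(group, label), so that group membership and label are statistically independent under the weighted distribution (after Kamiran & Calders, 2012). The helper below is an illustrative implementation under that assumption, not the paper's code.

```python
# Illustrative reweighing pre-processing: one weight per instance so that
# the weighted joint distribution P(group, y) factorizes into P(group)P(y).
import numpy as np

def reweighing_weights(group: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return per-instance weights P(g)P(y)/P(g, y) for each (group, label) cell."""
    n = len(y)
    weights = np.empty(n, dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            p_joint = mask.sum() / n                      # observed P(g, y)
            p_expected = (group == g).mean() * (y == label).mean()
            weights[mask] = p_expected / p_joint if p_joint > 0 else 0.0
    return weights

# The weights can be passed to most classical estimators, e.g.:
#   LogisticRegression().fit(X, y, sample_weight=reweighing_weights(group, y))
```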

Source journal
British Journal of Educational Technology (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 15.60
Self-citation rate: 4.50%
Annual publications: 111
Journal description: BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.