Fairness of Academic Performance Prediction for the Distribution of Support Measures for Students: Differences in Perceived Fairness of Distributive Justice Norms
Authors: Marco Lünich, Birte Keller, Frank Marcinkowski
Journal: Technology, Knowledge and Learning (Q1, Education & Educational Research; Impact Factor 3.0)
DOI: 10.1007/s10758-023-09698-y
Publication date: 2023-11-11
Publication type: Journal Article
Citation count: 0
Abstract
Artificial intelligence in higher education is becoming more prevalent, as it promises to improve and accelerate administrative processes concerning student support, with the aim of increasing student success and graduation rates. For instance, Academic Performance Prediction (APP) provides individual feedback and serves as the foundation for distributing student support measures. However, the use of APP, with all its challenges (e.g., inherent biases), significantly impacts the future prospects of young adults. Therefore, it is important to weigh the opportunities and risks of such systems carefully and to involve affected students in the development phase. This study addresses students’ fairness perceptions of the distribution of support measures based on an APP system. First, we examine how students evaluate three different distributive justice norms, namely equality, equity, and need. Second, we investigate whether fairness perceptions differ between APP based on human and on algorithmic decision-making, and third, we address whether evaluations differ between students of science, technology, engineering, and math (STEM) and students of social sciences, humanities, and the arts for people and the economy (SHAPE). To this end, we conducted a cross-sectional survey with a 2 × 3 factorial design among n = 1378 German students, in which we used the distinct distribution norms and decision-making agents as design factors. Our findings suggest that students prefer an equality-based distribution of support measures and that this preference is not influenced by whether APP is based on human or algorithmic decision-making. Moreover, the field of study does not influence fairness perceptions, except that students of STEM subjects evaluate a distribution based on the need norm as fairer than students of SHAPE subjects.

Based on these findings, higher education institutions should prioritize student-centric decisions when considering APP, weigh the actual need against potential risks, and establish continuous feedback through ongoing consultation with all stakeholders.
Journal description:
Technology, Knowledge and Learning emphasizes the increased interest in context-aware, adaptive, and personalized digital learning environments. Rapid technological developments have led to new research challenges focusing on digital learning, gamification, automated assessment, and learning analytics. These emerging systems aim to provide learning experiences that are delivered via online environments as well as mobile devices and tailored to the educational needs, personal characteristics, and particular circumstances of the individual learner or a (massive) group of interconnected learners. Such diverse learning experiences in real-world and virtual situations generate big data with rich potential for in-depth intelligent analysis, adaptive feedback, and scaffolds whenever the learner needs them. Novel manuscripts are welcome that account for how these new technologies and systems reconfigure learning experiences, assessment methodologies, and future educational practices. Technology, Knowledge and Learning also publishes guest-edited themed special issues linked to the emerging field of educational technology.
Submissions can be empirical investigations, work-in-progress studies, or emerging technology reports. Empirical investigations report quantitative or qualitative research demonstrating advances in digital learning, gamification, automated assessment, or learning analytics. Work-in-progress studies provide early insights into leading projects or document the progression of excellent research within the field of digital learning, gamification, automated assessment, or learning analytics. Emerging technology reports review new developments in educational technology by assessing their potential for leading digital learning environments. Manuscripts submitted to Technology, Knowledge and Learning undergo a blind review process involving expert reviews and in-depth evaluations. Initial feedback is usually provided within eight weeks, including in-progress open-access abstracts and review snapshots.