Investigating perceived fairness of AI prediction system for math learning: A mixed-methods study with college students

IF 6.8 | Tier 1 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH
Yukyeong Song, Chenglu Li, Wanli Xing, Bailing Lyu, Wangda Zhu
{"title":"调查人工智能预测系统对数学学习的感知公平性:一项针对大学生的混合方法研究","authors":"Yukyeong Song ,&nbsp;Chenglu Li ,&nbsp;Wanli Xing ,&nbsp;Bailing Lyu ,&nbsp;Wangda Zhu","doi":"10.1016/j.iheduc.2025.101000","DOIUrl":null,"url":null,"abstract":"<div><div>Entities such as governments and universities have begun using AI for algorithmic decision-making that impacts people's lives. Despite their known benefits, such as efficiency, the public has raised concerns about the fairness of AI's decision-making. Here, the concept of perceived fairness, defined as people's emotional, cognitive, and behavioral responses toward the justice of the AI system, has been widely discussed as one of the important factors in determining technology acceptance. In the field of AI in education, students are among the biggest stakeholders; thus, it is important to consider students' perceived fairness of AI decision-making systems to gauge technology acceptance. This study adopted an explanatory sequential mixed-method research design involving 428 college students to investigate the factors that impact students' perceived fairness of AI's pass-or-fail prediction decisions in the context of math learning and suggest ways to improve the perceived fairness based on students' voices. The findings suggest that students who received a favorable prediction outcome (i.e., pass), who were presented with a system that had a lower algorithmic bias and higher transparency, who major(ed) in STEM (vs. non-STEM), who have higher math anxiety, and who received the outcome that matches their math knowledge level (i.e., accurate) tend to report a higher level of perceived fairness for the AI's prediction decisions. Interesting interaction effects were also found regarding decision-making, students' math anxiety and knowledge, and the outcome's favorability on students' perceived fairness. Qualitative thematic analysis revealed students' strong desire for transparency with guidance, explainability, and interactive communication with the AI system, as well as constructive feedback and emotional support. This study contributes to the development of a justice theory in the era of AI and suggests practical design implications for AI systems and communication strategies with AI systems in education.</div></div>","PeriodicalId":48186,"journal":{"name":"Internet and Higher Education","volume":"65 ","pages":"Article 101000"},"PeriodicalIF":6.8000,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating perceived fairness of AI prediction system for math learning: A mixed-methods study with college students\",\"authors\":\"Yukyeong Song ,&nbsp;Chenglu Li ,&nbsp;Wanli Xing ,&nbsp;Bailing Lyu ,&nbsp;Wangda Zhu\",\"doi\":\"10.1016/j.iheduc.2025.101000\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Entities such as governments and universities have begun using AI for algorithmic decision-making that impacts people's lives. Despite their known benefits, such as efficiency, the public has raised concerns about the fairness of AI's decision-making. Here, the concept of perceived fairness, defined as people's emotional, cognitive, and behavioral responses toward the justice of the AI system, has been widely discussed as one of the important factors in determining technology acceptance. 
In the field of AI in education, students are among the biggest stakeholders; thus, it is important to consider students' perceived fairness of AI decision-making systems to gauge technology acceptance. This study adopted an explanatory sequential mixed-method research design involving 428 college students to investigate the factors that impact students' perceived fairness of AI's pass-or-fail prediction decisions in the context of math learning and suggest ways to improve the perceived fairness based on students' voices. The findings suggest that students who received a favorable prediction outcome (i.e., pass), who were presented with a system that had a lower algorithmic bias and higher transparency, who major(ed) in STEM (vs. non-STEM), who have higher math anxiety, and who received the outcome that matches their math knowledge level (i.e., accurate) tend to report a higher level of perceived fairness for the AI's prediction decisions. Interesting interaction effects were also found regarding decision-making, students' math anxiety and knowledge, and the outcome's favorability on students' perceived fairness. Qualitative thematic analysis revealed students' strong desire for transparency with guidance, explainability, and interactive communication with the AI system, as well as constructive feedback and emotional support. This study contributes to the development of a justice theory in the era of AI and suggests practical design implications for AI systems and communication strategies with AI systems in education.</div></div>\",\"PeriodicalId\":48186,\"journal\":{\"name\":\"Internet and Higher Education\",\"volume\":\"65 \",\"pages\":\"Article 101000\"},\"PeriodicalIF\":6.8000,\"publicationDate\":\"2025-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet and Higher Education\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1096751625000090\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet and Higher Education","FirstCategoryId":"95","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1096751625000090","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Entities such as governments and universities have begun using AI for algorithmic decision-making that impacts people's lives. Despite their known benefits, such as efficiency, the public has raised concerns about the fairness of AI's decision-making. Here, the concept of perceived fairness, defined as people's emotional, cognitive, and behavioral responses toward the justice of the AI system, has been widely discussed as one of the important factors in determining technology acceptance. In the field of AI in education, students are among the biggest stakeholders; thus, it is important to consider students' perceived fairness of AI decision-making systems to gauge technology acceptance. This study adopted an explanatory sequential mixed-method research design involving 428 college students to investigate the factors that impact students' perceived fairness of AI's pass-or-fail prediction decisions in the context of math learning and suggest ways to improve the perceived fairness based on students' voices. The findings suggest that students who received a favorable prediction outcome (i.e., pass), who were presented with a system that had a lower algorithmic bias and higher transparency, who major(ed) in STEM (vs. non-STEM), who have higher math anxiety, and who received the outcome that matches their math knowledge level (i.e., accurate) tend to report a higher level of perceived fairness for the AI's prediction decisions. Interesting interaction effects were also found regarding decision-making, students' math anxiety and knowledge, and the outcome's favorability on students' perceived fairness. Qualitative thematic analysis revealed students' strong desire for transparency with guidance, explainability, and interactive communication with the AI system, as well as constructive feedback and emotional support. This study contributes to the development of a justice theory in the era of AI and suggests practical design implications for AI systems and communication strategies with AI systems in education.
Source journal
Internet and Higher Education
CiteScore: 19.30
Self-citation rate: 4.70%
Annual publications: 30
Review time: 40 days
Journal description: The Internet and Higher Education is a quarterly peer-reviewed journal focused on contemporary issues and future trends in online learning, teaching, and administration within post-secondary education. It welcomes contributions from diverse academic disciplines worldwide and provides a platform for theory papers, research studies, critical essays, editorials, reviews, case studies, and social commentary.