Huixiao Le, Yuan Shen, Zijian Li, Mengyu Xia, Luzhen Tang, Xinyu Li, Jiyou Jia, Qiong Wang, Dragan Gašević, Yizhou Fan
British Journal of Educational Technology, 56(5), 1758–1783. DOI: 10.1111/bjet.13614. Published 2025-07-04.
Breaking human dominance: Investigating learners' preferences for learning feedback from generative AI and human tutors
Understanding learners' preferences in educational settings is crucial for optimizing learning outcomes and experience. As artificial intelligence (AI) becomes increasingly integrated into educational contexts, understanding learners' preferences between AI and human tutors is essential to support their learning. While AI demonstrates growing potential in education, the phenomenon of algorithm aversion, a tendency to favour human decision making over algorithmic solutions, requires further investigation. To explore this issue, an experiment involving 114 university students measured learners' preferences for different feedback sources before and after exposure to one of four conditions: no feedback, human tutor feedback, ChatGPT feedback through a free-dialogue user interface, or AI-powered writing analytics tool feedback through a structured interface. Our results revealed a strong initial preference for human tutors. However, the post-task analysis showed an important nuance: while the general preference for human tutors persisted, learners' preference for ChatGPT's free-dialogue interface (ChatGPT 4.0) increased, whereas the structured AI interface (the AI-powered writing analytics tool) reinforced the preference for human tutors. These findings offer theoretical and practical contributions by extending algorithm aversion theory to educational contexts and demonstrating that appropriate interaction design can mitigate this aversion. The success of free-dialogue interfaces suggests that overcoming algorithm aversion may depend more on creating natural, flexible interaction experiences than on purely technical optimization. However, we must also consider that increased preference for AI tools, particularly those with more engaging interfaces, may lead to over-reliance and metacognitive laziness among learners, highlighting the importance of balancing technological support with the development of independent learning skills.
Practitioner notes
What is already known about this topic?
Algorithm aversion exists across various contexts where individuals tend to prefer human over algorithmic decision-making.
The introduction of generative AI brings new possibilities for AI-supported learning.
What this paper adds
In academic writing tasks, learners show a strong initial preference for human tutors over generative AI feedback.
The strong initial preference for human tutors persists even after exposure to generative AI feedback.
Different interaction designs lead to divergent preference patterns: the free-dialogue interface increases preference for AI feedback, whereas the structured interface reinforces preference for human tutors.
Implications for practice and/or policy
Algorithm aversion in educational contexts can be mitigated through appropriate interaction design, particularly through natural dialogue interfaces.
Design AI educational tools with back-and-forth, conversational interfaces to reduce algorithm aversion.
Journal introduction:
BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.