Breaking human dominance: Investigating learners' preferences for learning feedback from generative AI and human tutors

IF 8.1 | CAS Tier 1 (Education) | Q1 Education & Educational Research
Huixiao Le, Yuan Shen, Zijian Li, Mengyu Xia, Luzhen Tang, Xinyu Li, Jiyou Jia, Qiong Wang, Dragan Gašević, Yizhou Fan
{"title":"打破人类统治:调查学习者对生成式人工智能和人类导师的学习反馈的偏好","authors":"Huixiao Le,&nbsp;Yuan Shen,&nbsp;Zijian Li,&nbsp;Mengyu Xia,&nbsp;Luzhen Tang,&nbsp;Xinyu Li,&nbsp;Jiyou Jia,&nbsp;Qiong Wang,&nbsp;Dragan Gašević,&nbsp;Yizhou Fan","doi":"10.1111/bjet.13614","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <p>Understanding learners' preferences in educational settings is crucial for optimizing learning outcomes and experience. As artificial intelligence (AI) becomes increasingly integrated into educational contexts, it is crucial to understand learners' preferences between AI and human tutors to support their learning. While AI demonstrates growing potential in education, the phenomenon of algorithm aversion, which is a tendency to favour human decision making over algorithmic solutions, requires further investigation. To explore this issue, an experiment involving 114 university students was conducted to measure learners' preferences for different feedback sources before and after exposure to one of four conditions: no feedback, human tutor feedback, ChatGPT feedback through a free-dialogue user interface, and AI-powered writing analytics tool feedback through a structured interface. Our results revealed a strong initial preference for human tutors. However, the post-task analysis showed an important nuance. While the general preference for human tutors persisted, learners' preference towards the free-dialogue interface (ChatGPT 4.0) of ChatGPT increased, whereas the structured AI interface (AI-powered writing analytics tool) reinforced the preference for human tutors. These findings offer theoretical and practical contributions by extending algorithm aversion theory to educational contexts and demonstrating that appropriate interaction design can mitigate this aversion. The success of free-dialogue interfaces suggests that overcoming algorithm aversion may depend more on creating natural, flexible interaction experiences than purely technical optimization. 
However, we must also consider that increased preference for AI tools, particularly those with more engaging interfaces, may potentially lead to over-reliance and metacognitive laziness among learners, highlighting the importance of balancing technological support with the development of independent learning skills.</p>\n </section>\n \n <section>\n \n <div>\n \n <div>\n \n <h3>Practitioner notes</h3>\n <p>What is already known about this topic?\n\n </p><ul>\n \n <li>Algorithm aversion exists across various contexts where individuals tend to prefer human over algorithmic decision-making.</li>\n \n <li>The introduction of generative AI brings new possibilities for AI-supported learning.</li>\n </ul>\n <p>What this paper adds?\n\n </p><ul>\n \n <li>In academic writing tasks, learners show strong initial preference for human tutors over Generative AI feedback.</li>\n \n <li>Strong initial preference for human tutors persists even after exposure to generative AI feedback.</li>\n \n <li>Different interaction designs lead to divergent preference patterns: Free-dialogue interface increases preference for AI feedback, structured interface reinforces preference for human tutors.</li>\n </ul>\n <p>Implications for practice and/or policy\n\n </p><ul>\n \n <li>Algorithm aversion in educational contexts can be mitigated through appropriate interaction design, particularly through natural dialogue interfaces.</li>\n \n <li>Design AI educational tools with back-and-forth, conversational interfaces to reduce algorithm aversion.</li>\n </ul>\n </div>\n </div>\n </section>\n </div>","PeriodicalId":48315,"journal":{"name":"British Journal of Educational Technology","volume":"56 5","pages":"1758-1783"},"PeriodicalIF":8.1000,"publicationDate":"2025-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://bera-journals.onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13614","citationCount":"0","resultStr":"{\"title\":\"Breaking human 
dominance: Investigating learners' preferences for learning feedback from generative AI and human tutors\",\"authors\":\"Huixiao Le,&nbsp;Yuan Shen,&nbsp;Zijian Li,&nbsp;Mengyu Xia,&nbsp;Luzhen Tang,&nbsp;Xinyu Li,&nbsp;Jiyou Jia,&nbsp;Qiong Wang,&nbsp;Dragan Gašević,&nbsp;Yizhou Fan\",\"doi\":\"10.1111/bjet.13614\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <p>Understanding learners' preferences in educational settings is crucial for optimizing learning outcomes and experience. As artificial intelligence (AI) becomes increasingly integrated into educational contexts, it is crucial to understand learners' preferences between AI and human tutors to support their learning. While AI demonstrates growing potential in education, the phenomenon of algorithm aversion, which is a tendency to favour human decision making over algorithmic solutions, requires further investigation. To explore this issue, an experiment involving 114 university students was conducted to measure learners' preferences for different feedback sources before and after exposure to one of four conditions: no feedback, human tutor feedback, ChatGPT feedback through a free-dialogue user interface, and AI-powered writing analytics tool feedback through a structured interface. Our results revealed a strong initial preference for human tutors. However, the post-task analysis showed an important nuance. While the general preference for human tutors persisted, learners' preference towards the free-dialogue interface (ChatGPT 4.0) of ChatGPT increased, whereas the structured AI interface (AI-powered writing analytics tool) reinforced the preference for human tutors. These findings offer theoretical and practical contributions by extending algorithm aversion theory to educational contexts and demonstrating that appropriate interaction design can mitigate this aversion. 
The success of free-dialogue interfaces suggests that overcoming algorithm aversion may depend more on creating natural, flexible interaction experiences than purely technical optimization. However, we must also consider that increased preference for AI tools, particularly those with more engaging interfaces, may potentially lead to over-reliance and metacognitive laziness among learners, highlighting the importance of balancing technological support with the development of independent learning skills.</p>\\n </section>\\n \\n <section>\\n \\n <div>\\n \\n <div>\\n \\n <h3>Practitioner notes</h3>\\n <p>What is already known about this topic?\\n\\n </p><ul>\\n \\n <li>Algorithm aversion exists across various contexts where individuals tend to prefer human over algorithmic decision-making.</li>\\n \\n <li>The introduction of generative AI brings new possibilities for AI-supported learning.</li>\\n </ul>\\n <p>What this paper adds?\\n\\n </p><ul>\\n \\n <li>In academic writing tasks, learners show strong initial preference for human tutors over Generative AI feedback.</li>\\n \\n <li>Strong initial preference for human tutors persists even after exposure to generative AI feedback.</li>\\n \\n <li>Different interaction designs lead to divergent preference patterns: Free-dialogue interface increases preference for AI feedback, structured interface reinforces preference for human tutors.</li>\\n </ul>\\n <p>Implications for practice and/or policy\\n\\n </p><ul>\\n \\n <li>Algorithm aversion in educational contexts can be mitigated through appropriate interaction design, particularly through natural dialogue interfaces.</li>\\n \\n <li>Design AI educational tools with back-and-forth, conversational interfaces to reduce algorithm aversion.</li>\\n </ul>\\n </div>\\n </div>\\n </section>\\n </div>\",\"PeriodicalId\":48315,\"journal\":{\"name\":\"British Journal of Educational Technology\",\"volume\":\"56 
5\",\"pages\":\"1758-1783\"},\"PeriodicalIF\":8.1000,\"publicationDate\":\"2025-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://bera-journals.onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13614\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Educational Technology\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13614\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Educational Technology","FirstCategoryId":"95","ListUrlMain":"https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13614","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract


Breaking human dominance: Investigating learners' preferences for learning feedback from generative AI and human tutors


Understanding learners' preferences in educational settings is crucial for optimizing learning outcomes and experiences. As artificial intelligence (AI) becomes increasingly integrated into educational contexts, understanding learners' preferences between AI and human tutors is essential to supporting their learning. While AI demonstrates growing potential in education, the phenomenon of algorithm aversion, a tendency to favour human decision making over algorithmic solutions, requires further investigation. To explore this issue, an experiment involving 114 university students was conducted to measure learners' preferences for different feedback sources before and after exposure to one of four conditions: no feedback, human tutor feedback, ChatGPT feedback through a free-dialogue user interface, and AI-powered writing analytics tool feedback through a structured interface. Our results revealed a strong initial preference for human tutors. However, the post-task analysis showed an important nuance: while the general preference for human tutors persisted, learners' preference for the free-dialogue interface (ChatGPT 4.0) increased, whereas the structured AI interface (the AI-powered writing analytics tool) reinforced the preference for human tutors. These findings offer theoretical and practical contributions by extending algorithm aversion theory to educational contexts and demonstrating that appropriate interaction design can mitigate this aversion. The success of free-dialogue interfaces suggests that overcoming algorithm aversion may depend more on creating natural, flexible interaction experiences than on purely technical optimization. However, we must also consider that increased preference for AI tools, particularly those with more engaging interfaces, may lead to over-reliance and metacognitive laziness among learners, highlighting the importance of balancing technological support with the development of independent learning skills.

Practitioner notes

What is already known about this topic?

  • Algorithm aversion exists across various contexts where individuals tend to prefer human over algorithmic decision-making.
  • The introduction of generative AI brings new possibilities for AI-supported learning.

What this paper adds

  • In academic writing tasks, learners show a strong initial preference for human tutors over generative AI feedback.
  • Strong initial preference for human tutors persists even after exposure to generative AI feedback.
  • Different interaction designs lead to divergent preference patterns: the free-dialogue interface increases preference for AI feedback, whereas the structured interface reinforces preference for human tutors.

Implications for practice and/or policy

  • Algorithm aversion in educational contexts can be mitigated through appropriate interaction design, particularly through natural dialogue interfaces.
  • Design AI educational tools with back-and-forth, conversational interfaces to reduce algorithm aversion.
Source journal
British Journal of Educational Technology (Education & Educational Research)
CiteScore: 15.60 | Self-citation rate: 4.50% | Articles per year: 111
Journal description: BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of the British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high-quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.