Assessing student perceptions and use of instructor versus AI-generated feedback

IF 6.7 | CAS Tier 1 (Education) | Q1 EDUCATION & EDUCATIONAL RESEARCH
Erkan Er, Gökhan Akçapınar, Alper Bayazıt, Omid Noroozi, Seyyed Kazem Banihashem
British Journal of Educational Technology, 56(3), 1074–1091. DOI: 10.1111/bjet.13558
Published: 27 December 2024 · https://onlinelibrary.wiley.com/doi/10.1111/bjet.13558
Citations: 0

Abstract

Despite the growing research interest in the use of large language models for feedback provision, it remains unknown how students perceive and use AI-generated feedback compared to instructor feedback in authentic settings. To address this gap, this study compared instructor and AI-generated feedback in a Java programming course through an experimental research design where students were randomly assigned to either condition. Both feedback providers used the same assessment rubric, and students were asked to improve their work based on the feedback. The feedback perceptions scale and students' laboratory assignment scores were compared between the two conditions. Results showed that students perceived instructor feedback as significantly more useful than AI feedback. While instructor feedback was also perceived as more fair, developmental and encouraging, these differences were not statistically significant. Importantly, students receiving instructor feedback showed significantly greater improvements in their lab scores compared to those receiving AI feedback, even after controlling for their initial knowledge levels. Based on the findings, we posit that AI models may need to be trained on data specific to educational contexts, and that hybrid feedback models combining the strengths of AI and instructors should be considered for effective feedback practices.
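
The abstract does not describe how the AI-generated feedback was actually produced. Purely as an illustration of what rubric-aligned feedback generation with a large language model could look like, the sketch below assumes the OpenAI Chat Completions API; the model name, rubric text, prompt wording and the generate_feedback helper are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: the study's actual feedback pipeline is not described
# in the abstract. Assumes the OpenAI Python SDK (>= 1.0); model name, rubric text
# and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rubric for a Java lab assignment (the study's rubric is not given here).
RUBRIC = """1. Correctness: the program compiles and produces the expected output.
2. Code quality: meaningful identifiers, consistent formatting, no duplicated logic.
3. Required constructs: loops, methods and exception handling used as specified."""

def generate_feedback(java_submission: str) -> str:
    """Request rubric-aligned, improvement-oriented feedback on a Java submission."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the paper's model is not stated in the abstract
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant for an introductory Java course. "
                    "Give formative feedback strictly against the rubric; "
                    "do not write the corrected code for the student."
                ),
            },
            {
                "role": "user",
                "content": f"Rubric:\n{RUBRIC}\n\nStudent submission:\n```java\n{java_submission}\n```",
            },
        ],
    )
    return response.choices[0].message.content
```

In a hybrid workflow of the kind the authors suggest, output like this would be reviewed and adjusted by an instructor before reaching students rather than delivered as the sole source of feedback.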

Practitioner notes

What is already known about this topic

  • Feedback is crucial for student learning in programming education.
  • Providing detailed personalised feedback is challenging for instructors.
  • AI-powered solutions like ChatGPT can be effective in feedback provision.
  • Existing research is limited and shows mixed results about AI-generated feedback.

What this paper adds

  • The effectiveness of AI-generated feedback was compared to instructor feedback.
  • Both feedback types received positive perceptions, but instructor feedback was seen as more useful.
  • Instructor feedback led to greater score improvements in the programming task.

Implications for practice and/or policy

  • AI should not be the sole source of feedback, as human expertise is crucial.
  • AI models should be trained on context-specific data to improve feedback actionability.
  • Hybrid feedback models should be considered for a scalable and effective approach.
Source journal
British Journal of Educational Technology (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 15.60 · Self-citation rate: 4.50% · Articles published: 111
Journal description: BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high-quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.