Assessing student perceptions and use of instructor versus AI-generated feedback
Erkan Er, Gökhan Akçapınar, Alper Bayazıt, Omid Noroozi, Seyyed Kazem Banihashem
British Journal of Educational Technology, 56(3), 1074–1091
Published: 2024-12-27
DOI: 10.1111/bjet.13558
URL: https://onlinelibrary.wiley.com/doi/10.1111/bjet.13558
Cited by: 0
Abstract
Despite growing research interest in the use of large language models for feedback provision, it remains unknown how students perceive and use AI-generated feedback compared to instructor feedback in authentic settings. To address this gap, this study compared instructor and AI-generated feedback in a Java programming course through an experimental research design in which students were randomly assigned to either condition. Both feedback providers used the same assessment rubric, and students were asked to improve their work based on the feedback. The feedback perceptions scale and students' laboratory assignment scores were compared across the two conditions. Results showed that students perceived instructor feedback as significantly more useful than AI feedback. While instructor feedback was also perceived as more fair, developmental, and encouraging, these differences were not statistically significant. Importantly, students receiving instructor feedback showed significantly greater improvements in their lab scores than those receiving AI feedback, even after controlling for their initial knowledge levels. Based on the findings, we posit that AI models may need to be trained on data specific to educational contexts, and that hybrid feedback models combining the strengths of AI and instructors should be considered for effective feedback practices.
Practitioner notes
What is already known about this topic
Feedback is crucial for student learning in programming education.
Providing detailed personalised feedback is challenging for instructors.
AI-powered solutions like ChatGPT can be effective in feedback provision.
Existing research is limited and shows mixed results about AI-generated feedback.
What this paper adds
The effectiveness of AI-generated feedback was compared to instructor feedback.
Both feedback types received positive perceptions, but instructor feedback was seen as more useful.
Instructor feedback led to greater score improvements in the programming task.
Implications for practice and/or policy
AI should not be the sole source of feedback, as human expertise is crucial.
AI models should be trained on context-specific data to improve feedback actionability.
Hybrid feedback models should be considered for a scalable and effective approach.
About the journal
BJET is a primary source for academics and professionals in the fields of digital education and training technology throughout the world. The journal is published by Wiley on behalf of the British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments, and high-quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools, and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical, and vocational education, professional development, and corporate training.