Effects of AI-generated adaptive feedback on statistical skills and interest in statistics: A field experiment in higher education

Elisabeth Bauer, Constanze Richters, Amadeus J. Pickal, Moritz Klippert, Michael Sailer, Matthias Stadler

British Journal of Educational Technology, 56(5), 1735-1757. Published 2025-07-02. DOI: 10.1111/bjet.13609
https://bera-journals.onlinelibrary.wiley.com/doi/10.1111/bjet.13609
Citations: 0
Abstract
This study explores whether AI-generated adaptive feedback or static feedback is more favourable for students' interest and performance outcomes when learning statistics in a digital learning environment. Previous studies have favoured adaptive feedback over static feedback for skill acquisition; however, they have not investigated students' subject-specific interest as an outcome. This study randomly assigned 90 educational sciences students to four conditions in a 2 × 2 Solomon four-group design, with feedback type (adaptive vs. static) as one factor and, to control for pretest sensitisation, pretest participation (yes vs. no) as the other. Using a large language model, the adaptive condition provided feedback messages tailored to students' responses on several tasks about reporting statistical results in APA style, while the static condition offered a standardised expert solution. There was no evidence of pretest sensitisation and no significant effect of feedback type on task performance. However, feedback type had a significant medium-sized effect on interest, with lower interest observed in the adaptive condition than in the static condition. In highly structured learning tasks, AI-generated adaptive feedback, compared with static feedback, may therefore be non-essential for enhancing learners' performance and less favourable for their interest, potentially because of its impact on learners' perceived autonomy and competence.
About the journal:
BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high-quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.