On the role of engagement in automated feedback effectiveness: Insights from keystroke logging

Ronja Schiller, Johanna Fleckenstein, Lars Höft, Andrea Horbach, Jennifer Meyer

Computers & Education, Volume 238, Article 105386. DOI: 10.1016/j.compedu.2025.105386. Published online 2025-06-23. Available at: https://www.sciencedirect.com/science/article/pii/S036013152500154X
Feedback research increasingly focuses on the role of learners’ engagement in the feedback process. Process measures from technology-based learning environments that reflect writing behavior can provide new insights into the mechanisms underlying feedback effectiveness by making engagement visible. Previous research has shown that log data and similarity measures mediate the effects of automated feedback on learners’ revision performance. In the present study, we aimed to replicate and extend previous research using measures obtained from keystroke logging that represent the revision process at a more fine-grained level. We considered behavioral engagement (i.e., number of keystrokes and typing time) and writing pauses as potential indicators of cognitive engagement. In a classroom experiment, N = 453 English-as-a-foreign-language (EFL) learners (Mage = 16.11) completed a writing task and revised their draft, receiving either feedback generated by a large language model (i.e., GPT-3.5 Turbo) or no feedback. A second writing task served as a transfer task. All texts were scored automatically to assess performance. The effect of automated feedback on learners’ revision and transfer performance was mediated by the different indicators of behavioral engagement during text revision, although the direct effect of automated feedback on the transfer task was not significant. We found small effects of feedback on pause length and the number of pauses, but the indirect effects were not significant. The study provides further evidence for the role of learning engagement in feedback effectiveness and illustrates how online measures (i.e., keystroke logging) can be used to gain new insights into the effectiveness of automated feedback. The use of different process measures to assess learning engagement is discussed.
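To make concrete what indicators such as these might look like in practice, the sketch below derives keystroke count, typing time, and pause statistics from a timestamped keystroke log. This is an illustrative reconstruction, not the authors' actual pipeline; the function name, log format, and the 2-second pause threshold are all assumptions for illustration.

```python
# Illustrative sketch (not the study's actual pipeline): deriving simple
# behavioral-engagement indicators from a keystroke log.
# Assumptions: the log is a time-ordered list of (timestamp_seconds, key)
# events, and an inter-keystroke gap of >= 2.0 s counts as a pause.
from typing import List, Tuple


def engagement_indicators(
    log: List[Tuple[float, str]], pause_threshold: float = 2.0
) -> dict:
    """Compute keystroke-based engagement indicators from a keystroke log."""
    if not log:
        return {"keystrokes": 0, "typing_time": 0.0, "pauses": 0, "mean_pause": 0.0}
    times = [t for t, _ in log]
    # Inter-keystroke intervals between consecutive events
    gaps = [later - earlier for earlier, later in zip(times, times[1:])]
    pauses = [g for g in gaps if g >= pause_threshold]
    return {
        "keystrokes": len(log),               # behavioral engagement: activity volume
        "typing_time": times[-1] - times[0],  # behavioral engagement: time on task
        "pauses": len(pauses),                # potential cognitive-engagement indicator
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
    }


# Example: four keystrokes with one 3-second pause before the final one
events = [(0.0, "T"), (0.4, "h"), (0.9, "e"), (3.9, " ")]
print(engagement_indicators(events))
```

Such indicators would then typically enter a mediation model as intermediate variables between the feedback condition and revision or transfer performance.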
Journal overview:
Computers & Education seeks to advance understanding of how digital technology can improve education by publishing high-quality research that expands both theory and practice. The journal welcomes research papers exploring the pedagogical applications of digital technology, with a focus broad enough to appeal to the wider education community.