Improving writing feedback quality and self-efficacy of pre-service teachers in Gen-AI contexts: An experimental mixed-method design

Siyu Zhu, Qingyang Li, Yuan Yao, Jialin Li, Xinhua Zhu

Assessing Writing, Volume 66, Article 100960. Published 2025-06-19. DOI: 10.1016/j.asw.2025.100960

Citations: 0
Abstract
The rapid advancement of Generative AI (Gen-AI), such as ChatGPT, presents both opportunities and challenges for teacher education. For pre-service teachers (PSTs), Gen-AI offers new tools to enhance the efficiency and quality of writing feedback. However, it also raises concerns, as many PSTs lack classroom experience, confidence in giving feedback, and knowledge of how to effectively integrate AI-generated content into instructional practice. To address these issues, this study adopted a pre-post experimental design to examine the effects of targeted training on PSTs' provision of writing feedback, with a focus on feedback quality, self-efficacy, and their relationship in ChatGPT-supported contexts. Over a two-week training program with 30 PSTs, Wilcoxon signed-rank test results from the content analysis showed significant improvements in feedback quality and self-efficacy. Semi-structured interviews with eight participants identified cognitive changes and enhanced ChatGPT operational skills as key drivers of these improvements. We reaffirmed that mastery and vicarious experiences are crucial for enhancing teacher self-efficacy. Furthermore, a reciprocal relationship was observed between feedback quality and self-efficacy in providing ChatGPT-assisted feedback. This study contributes to the broader discourse on ChatGPT in education and offers specific strategies for effectively incorporating new technology into teacher training.
About the journal
Assessing Writing is a refereed international journal providing a forum for ideas, research, and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised) testing of writing, alternative performance assessments (such as portfolios), workplace sampling, and classroom assessment. The journal covers all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.