{"title":"Evaluating GPT ratings of EFL writing: A scoping review","authors":"Yi Chen","doi":"10.1016/j.asw.2026.101044","DOIUrl":null,"url":null,"abstract":"<div><div>The rise of large language models (LLMs), exemplified by GPT, has opened new possibilities for automated essay scoring (AES) in L2 education. Over the past three years, a growing number of studies have investigated GPT’s potential as a rater of English as a foreign language (EFL) writing. However, with no synthesis existing, the literature remains relatively fragmented. To address this gap, this scoping review analyzed 26 identified studies in terms of their research designs, evaluation foci, reported findings, and summative evaluations. Collectively, these studies addressed three core aspects of the evaluation inference in a full validity argument—accuracy, consistency, and fairness, and presented a cautiously positive view of GPT’s performance in rating EFL essays. Preliminary insights include GPT-4 and GPT-4o’s superiority over standard GPT-3.5 in accuracy and consistency, the promise of few-shot learning prompts, and GPT’s tendency to score more severely in language-related dimensions. The review also identified some methodological limitations across the literature, and highlighted key areas for further investigation. By providing a structured overview of this emerging field for the first time, this scoping review offers guidance for future research and for L2 educators considering the use of GPT models in EFL writing assessment.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"68 ","pages":"Article 101044"},"PeriodicalIF":5.5000,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293526000322","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/4/6 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
The rise of large language models (LLMs), exemplified by GPT, has opened new possibilities for automated essay scoring (AES) in L2 education. Over the past three years, a growing number of studies have investigated GPT’s potential as a rater of English as a foreign language (EFL) writing. However, no synthesis of this work exists, and the literature remains fragmented. To address this gap, this scoping review analyzed 26 studies in terms of their research designs, evaluation foci, reported findings, and summative evaluations. Collectively, these studies addressed three core aspects of the evaluation inference in a full validity argument (accuracy, consistency, and fairness) and presented a cautiously positive view of GPT’s performance in rating EFL essays. Preliminary insights include the superiority of GPT-4 and GPT-4o over standard GPT-3.5 in accuracy and consistency, the promise of few-shot learning prompts, and GPT’s tendency to score more severely on language-related dimensions. The review also identified methodological limitations across the literature and highlighted key areas for further investigation. By providing the first structured overview of this emerging field, this scoping review offers guidance for future research and for L2 educators considering the use of GPT models in EFL writing assessment.
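To make the few-shot strategy mentioned above concrete, the following is a minimal sketch, not drawn from any of the reviewed studies, of how a few-shot essay-scoring prompt might be issued through the OpenAI Python SDK. The model name, rubric wording, score band, and example essays are all hypothetical placeholders, not the setup used in the literature.

```python
# A minimal sketch of a few-shot essay-rating prompt (assumptions noted above).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot examples: pre-scored essays shown to the model
# before the target essay so it can calibrate to the rubric.
FEW_SHOT_EXAMPLES = [
    ("The internet have changed how student learn in many way...", 2),
    ("Although technology offers clear benefits, its impact on writing...", 4),
]

def build_messages(essay: str) -> list[dict]:
    """Assemble a chat prompt: rubric, scored examples, then the target essay."""
    messages = [{
        "role": "system",
        "content": (
            "You are an EFL writing rater. Score each essay from 1 (lowest) "
            "to 5 (highest) on overall quality. Reply with the score only."
        ),
    }]
    for text, score in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Essay:\n{text}"})
        messages.append({"role": "assistant", "content": str(score)})
    messages.append({"role": "user", "content": f"Essay:\n{essay}"})
    return messages

def rate_essay(essay: str, model: str = "gpt-4o") -> int:
    """Return the model's 1-5 rating for one essay."""
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(essay),
        temperature=0,  # low temperature is one common tactic for rating consistency
    )
    return int(response.choices[0].message.content.strip())
```

Casting the examples as prior user/assistant turns, rather than pasting them into one long instruction, is one common way to implement few-shot prompting; the reviewed studies vary in how they format rubrics and exemplars.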
Journal description:
Assessing Writing is a refereed international journal providing a forum for ideas, research, and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessment of all kinds, including traditional (direct and standardised) testing of writing, alternative performance assessments (such as portfolios), workplace sampling, and classroom assessment. The journal covers all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.