{"title":"The impact of self-revision, machine translation, and ChatGPT on L2 writing: Raters’ assessments, linguistic complexity, and error correction","authors":"Minjoo Kim , Yuah V. Chon","doi":"10.1016/j.asw.2025.100950","DOIUrl":null,"url":null,"abstract":"<div><div>This study explores how learners in a South Korean high school English as a Foreign Language (EFL) context can effectively use neural machine translation (MT) and ChatGPT to enhance their L2 writing. While recent AI tools offer significant potential for supporting human writing feedback, a comparative analysis of how these tools impact writing outcomes—compared to when L2 writers independently proofread and revise their writing—has not been fully examined. To address this gap, a controlled experiment was conducted using three distinct proofreading interventions—self-proofreading (SP), MT-assisted proofreading (MAP), and ChatGPT-assisted proofreading (CAP). Learners were encouraged to first compose their texts in their L2 and then use either MT through inverse translation or ChatGPT through a structured proofreading process. The findings revealed that learners using MAP and CAP demonstrated substantial improvements in overall writing quality compared to those relying solely on SP. CAP users, in particular, produced longer texts, exhibited greater lexical diversity, and constructed more complex sentences, although this was accompanied by reduced verb cohesion. Both MAP and CAP significantly reduced grammatical errors, but did not affect prepositional errors. These findings provide practical recommendations for integrating MT and ChatGPT into L2 writing pedagogy.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"65 ","pages":"Article 100950"},"PeriodicalIF":4.2000,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293525000376","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
This study explores how learners in a South Korean high school English as a Foreign Language (EFL) context can effectively use neural machine translation (MT) and ChatGPT to enhance their L2 writing. While recent AI tools offer significant potential for supporting human writing feedback, how these tools affect writing outcomes, compared with when L2 writers independently proofread and revise their own texts, has not been fully examined. To address this gap, a controlled experiment was conducted using three distinct proofreading interventions: self-proofreading (SP), MT-assisted proofreading (MAP), and ChatGPT-assisted proofreading (CAP). Learners were encouraged to first compose their texts in their L2 and then use either MT through inverse translation or ChatGPT through a structured proofreading process. The findings revealed that learners using MAP and CAP demonstrated substantial improvements in overall writing quality compared to those relying solely on SP. CAP users, in particular, produced longer texts, exhibited greater lexical diversity, and constructed more complex sentences, although this was accompanied by reduced verb cohesion. Both MAP and CAP significantly reduced grammatical errors but did not affect prepositional errors. These findings provide practical recommendations for integrating MT and ChatGPT into L2 writing pedagogy.
About the journal:
Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised) forms of writing testing, alternative performance assessments (such as portfolios), workplace sampling and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.