Assessing Writing | Pub Date: 2025-04-04 | DOI: 10.1016/j.asw.2025.100937
Peter Thwaites, Pauline Jadoulle, Magali Paquot
"Comparative judgment in L2 writing assessment: Reliability and validity across crowdsourced, community-driven, and trained rater groups of judges" (vol. 65, Article 100937)
Abstract: Several recent studies have explored the use of comparative judgement (CJ) for assessing second language writing. One of the claimed advantages of this method is that it generates valid assessments even when judgements are conducted by individuals outside the traditional language assessment community. However, evidence in support of this claim largely focuses on concurrent validity, i.e. the extent to which CJ rating scales generated by various groups of judges correlate with rubric-based assessments. Little evidence exists of the construct validity of using CJ for L2 writing assessment. The present study addresses this by exploring what judges pay attention to while making comparative judgements. Three distinct groups of judges assessed the same set of 25 English L2 argumentative essays, leaving comments after each of their decisions. These comments were then analysed to explore the construct relevance and construct representativeness of each judge group's rating scale. The results suggest that these scales differ in the extent to which they can be considered valid assessments of the target essays.
Assessing Writing | Pub Date: 2025-03-26 | DOI: 10.1016/j.asw.2025.100936
Xian Liao, Pengfei Zhao, Zicheng Li
"The relationship between executive functions, source use, and integrated writing performance" (vol. 65, Article 100936)
Abstract: An accurate assessment of writing relies on a thorough understanding of its underlying processes and related factors. While integrated writing (IW) is crucial for students' academic success and future career development, the factors influencing performance in such complex tasks remain under investigation. In particular, although the core role of source use in completing IW tasks is widely acknowledged, the factors that facilitate writers' effective use of sources still need to be identified. While recent studies have highlighted the critical role of executive functions (EFs), such as working memory, inhibition, and cognitive flexibility, during writing activities, the exact influence of these foundational cognitive skills on source use and IW performance remains unclear. To this end, this study recruited 233 secondary students in Hong Kong to complete a set of standardized EF tasks and a Chinese reading-to-write IW task. The students' written products were analyzed for their use of content ideas and for linguistic transformation of source materials. We found that visual-spatial working memory had a significant direct effect on IW performance. Two critical aspects of source use, ideas from sources and near copy, mediated the relationship between EF skills and IW performance. These findings contribute to our understanding of the role of EF skills in complex IW tasks. We highlight the implications of our results for the assessment, teaching, and learning of integrated writing.
Assessing Writing | Pub Date: 2025-03-22 | DOI: 10.1016/j.asw.2025.100935
Xiaolong Cheng, Jinfen Xu
"A mixed-methods approach to English-L1 teachers' implementation of written feedback in EFL classrooms" (vol. 65, Article 100935)
Abstract: While there are numerous studies investigating teacher written feedback in L2 writing contexts, much remains to be discovered about how English-L1 teachers enact this practice in EFL classrooms. To fill this gap, this mixed-methods study collected data from multiple sources, including questionnaires, semi-structured interviews, students' writing samples, stimulated recalls, and documents, to examine these teachers' implementation of written feedback, and the factors influencing it, in Chinese tertiary EFL settings. The results of the survey study were generally in line with those of the in-depth study in terms of feedback scope, strategy, and focus, but differences emerged in feedback orientation. Furthermore, both the quantitative and qualitative results indicated that the teachers' provision of written feedback was mediated by a combination of factors related to teachers, students, and contexts. Important pedagogical implications are also discussed.
Assessing Writing | Pub Date: 2025-02-28 | DOI: 10.1016/j.asw.2025.100921
Andrew Potter, Mitchell Shortt, Maria Goldshtein, Rod D. Roscoe
"Assessing academic language in tenth grade essays using natural language processing" (vol. 64, Article 100921)
Abstract: Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to which features related to academic language explained variance in human-assigned scores of writing quality in a large corpus of source-based argumentative essays (n = 20,820) written by 10th grade students. Using NLP tools, we identified and then calculated linguistic features from essays related to the lexical, syntactic, cohesion, and rhetorical features of academic language. Consistent with prior research findings, results from a hierarchical linear regression revealed that AL features explained 8% of variance in writing quality when controlling for essay length. The most important AL features included cohesion with the source text, academic wording, and global cohesion. Implications for integrating NLP-produced measures of AL in writing assessment and automated writing evaluation (AWE) systems are discussed.
Assessing Writing | Pub Date: 2025-02-25 | DOI: 10.1016/j.asw.2025.100934
Kwangmin Lee, Ray J.T. Liao, I.-Chun Vera Hsiao, Junhee Park, Yafei Ye
"Predicting inappropriate source use from scores of language use, source comprehension, and organizational features: A study using generalized linear models" (vol. 64, Article 100934)
Abstract: This paper examines the extent to which inappropriate source use (verbatim source use and patchwriting) can be predicted by scores of other textual features that are commonly evaluated in second/foreign language (L2) integrated writing assessment. A total of 246 advanced-level English as a Foreign Language (EFL) test-takers enrolled in a Chinese higher education institution provided integrated essays that required both summary and argumentation. All the collected essays were rated by two experienced raters and checked for interrater reliability by way of generalizability theory. Then, a series of generalized linear models was compared to identify the best-fitting model that explained the relationship between the independent variables and inappropriate source use. Results indicated that the zero-inflated beta-binomial model provided the best fit to the data, with approximately 43.67% of the observations estimated to be extra zeros. Parameter estimates of this model indicated (1) non-significant effects of language use and source comprehension on inappropriate source use and (2) a significant negative effect of organizational features on the dependent variable. This suggests that focusing on organizational features, operationalized herein as organization, coherence, development of ideas, and authorial voice, can help L2 test-takers reduce reliance on inappropriate source use. Implications for research and practice are discussed.
Assessing Writing | Pub Date: 2025-02-12 | DOI: 10.1016/j.asw.2025.100922
Xinhua Zhu, Yiwen Sun, Yaping Liu, Wandong Xu, Choo Mui Cheong
"Towards a better understanding of integrated writing performance: The influence of literacy strategy use and independent language skills" (vol. 64, Article 100922)
Abstract: This study explores the influence mechanism of literacy strategy use and independent language skills (e.g., reading and writing) on integrated writing (IW) performance. A total of 322 Secondary Four students from four schools in Hong Kong completed single-text reading, multiple-text reading, independent writing, and IW tasks, along with questionnaires investigating their reading strategy use and IW strategy use. Path analyses revealed that multiple-text reading and independent writing had comparable significant impacts on IW, mediating the influence of single-text comprehension. In addition, reading strategy use impacted IW indirectly through independent literacy skills and IW strategy use, while IW strategies exerted a direct influence on IW. Our findings underscore the critical role of language skills in mediating the influence of reading strategies on IW performance among young first language (L1) learners. The implications for research and practice are discussed, emphasizing the complexity of the IW construct and the need for balanced language skills and strategy instruction to enhance IW task performance.
Assessing Writing | Pub Date: 2025-02-12 | DOI: 10.1016/j.asw.2025.100918
Aynur Ismayilli Karakoҫ, Peter Gu, Rachael Ruegg
"Designing a rating scale for an integrated reading-writing test: A needs-oriented approach" (vol. 64, Article 100918)
Abstract: In line with current trends in higher education, EAP programmes are held accountable for preparing students for higher education and assessing their readiness for it. Thus, multimodal tasks, including integrated writing (IW) assessments, have seen a resurgence because they arguably closely mirror academic writing. However, test practicality constraints and variability in the use and format of these assessments mean that rating scales often fall short of substantiating the central claims of IW assessment. We developed an integrated reading-writing scale designed to assess readiness for first-year humanities and social science courses, taking into account reading-writing requirements and empirical research on IW tests. We approached test development as part of the ongoing validation effort, detailing the considerations involved in the scale development process. We argue that alignment with academic writing requirements should guide the development of IW tests, thereby acknowledging and comprehending the nuances of academic writing. The paper demonstrates how considerations and decisions in scale design form part of the validation process from the start, a reminder that assessment is not just a quantitative exercise but a multifaceted process.
Assessing Writing | Pub Date: 2025-02-11 | DOI: 10.1016/j.asw.2025.100920
Ya Zhang, Zhanhao Jiang
"Modeling the interplay between teacher support, anxiety and grit in predicting feedback-seeking behavior in L2 writing" (vol. 64, Article 100920)
Abstract: The introduction of feedback-seeking behavior (FSB) into second language (L2) writing has advanced the understanding of the role of learners as proactive seekers rather than passive recipients of feedback. Nevertheless, the existing literature has primarily focused on identifying personal conative factors as antecedents of FSB in L2 writing, often neglecting the impact of environmental, emotional, and personality trait variables. To address this gap, this study recruited 213 English as a foreign language (EFL) learners to examine how an environmental factor (EFL teacher support), a personal emotional factor (anxiety), and a personal personality factor (grit) individually and jointly predict FSB in L2 writing within the Chinese tertiary context. Structural equation modeling (SEM) results revealed that EFL teacher support directly and positively predicted the two dimensions of FSB, viz. feedback monitoring and feedback inquiry. The mediation analysis demonstrated that EFL teacher support indirectly predicted the two dimensions of FSB via the sole mediation of grit and via the chain mediation of grit and anxiety. However, anxiety did not exhibit a significant mediating effect between EFL teacher support and the two dimensions of FSB. The implications for L2 writing instruction are discussed, and potential avenues for future research are identified.
Assessing Writing | Pub Date: 2025-02-10 | DOI: 10.1016/j.asw.2025.100923
Kim M. Mitchell, Johnson Li, Rasheda Rabbani
"Validation of the individual and collective self-efficacy scale for teaching writing in post-secondary faculty" (vol. 64, Article 100923)
Abstract: Faculty actions in the classroom are known to impact student writing self-efficacy and academic achievement. The purpose of this paper was to validate Locke and Johnston's Individual and Collective Self-Efficacy for Teaching Writing Scales, a tool originally validated with high school teachers, in a new population of post-secondary faculty. Exploratory and confirmatory factor analysis methods were used in two studies with independent samples: multidisciplinary faculty (N = 281) for the exploratory factor analysis (Study 1) and nursing discipline-specific faculty (N = 187) for the confirmatory factor analysis (Study 2). Three factors were identified in the questionnaire, which maintained the essence of the theoretical structure proposed by Locke and Johnston. Factor 1 was named Context and Process Competencies, Factor 2 Textural Competencies, and Factor 3 Motivational Competencies. This factor structure was confirmed with acceptable goodness of fit in the confirmatory factor analysis (Study 2). Learning to be a teacher of writing is a developmental process, and this measurement tool provides validation evidence that speaks to its usefulness in understanding that process.
Assessing Writing | Pub Date: 2025-02-07 | DOI: 10.1016/j.asw.2025.100919
Li Xiaosa, Ke Ping
"How L2 student writers engage with automated feedback: A longitudinal perspective" (vol. 64, Article 100919)
Abstract: Recent qualitative research on L2 students' use of AWE (automated writing evaluation) feedback reveals that learner engagement is not simply a binary process of accepting or rejecting suggestions; rather, it is influenced by various individual and contextual factors. Building on this foundation, the present study investigates how three Chinese EFL (English as a foreign language) learners at different proficiency levels engaged with feedback from Youdao Writing, a local AWE system, over a 16-week semester. Data were collected through screen-capture recordings, stimulated recalls and semi-structured interviews, focusing on their engagement at the affective, behavioral and cognitive levels. The findings reveal significant individual and longitudinal differences in the students' experiences, perceptions, and emotional responses. These insights highlight the complexity of student engagement with automated feedback and suggest that instructional practices in EFL contexts should account for these individual and longitudinal differences to enhance the effectiveness of feedback. The study concludes with recommendations for integrating AWE feedback in a way that can foster deeper learner engagement and facilitate writing development.