"Assessing the effects of task complexity on cognitive demands in L2 writing"
Na Tao, Ying Wang
Assessing Writing, Vol. 67, Article 100998. Published 2026-01-01 (epub 2025-12-04). DOI: 10.1016/j.asw.2025.100998

Abstract: The assessment of task-generated cognitive demands has received increasing attention in task complexity research. However, scant attention has been paid to assessing cognitive demands when task complexity is manipulated along both resource-directing and resource-dispersing dimensions. To address this gap, the present study investigated the relative effects of reasoning demands and prior knowledge on cognitive demands in L2 writing. Eighty-eight EFL students completed two letter-writing tasks with varying reasoning demands under one of two conditions: with or without prior knowledge available. Cognitive demands were assessed with a post-task questionnaire, the dual-task method, and open-ended questions. The results revealed that reasoning demands and prior knowledge were strong determinants of cognitive demands, providing empirical evidence for Robinson's Cognition Hypothesis. Moreover, the post-task questionnaire, the dual-task method, and the open-ended questions were found to assess distinct aspects of cognitive demands, highlighting the importance of data triangulation in exploring task complexity effects. The study offers language teachers and assessors implications for task design and implementation.

"Development of a Genre Adherence Rubric (GAR) for applied linguistics research articles"
Mahsa Alinasab, Javad Gholami, Zhila Mohammadnia
Assessing Writing, Vol. 67, Article 100991. Published 2026-01-01. DOI: 10.1016/j.asw.2025.100991

Abstract: This study reports the development of a four-descriptor Genre Adherence Rubric (GAR) for research articles (RAs) based on two pilot studies. To this end, we designed, implemented, and assessed a genre-oriented RA writing course in a master's program in applied linguistics. The instructional package contained knowledge-giving and hands-on materials and tasks on moves/steps, their sequencing, and linguistic features in RA sections. Following the course, participants were asked to revise their first-draft RAs. We defined, developed, and piloted the GAR, which primarily covered move obligation, optionality, and sequencing, and used it to rate the original and revised RAs. The first pilot and scorer feedback showed that language needed to be included as an additional descriptor. In the second pilot study, implementing the four-descriptor GAR yielded meaningful differences in another set of revised RAs. As a novel attempt to rate RAs and similar scholarly writing through a genre lens, the GAR opens new avenues for research but warrants further confirmation or modification. Given the ever-growing importance of scholarly writing and publishing, the findings have tenable implications for journal editors, publishers, and academic writing instructors seeking to adopt or adapt GAR-like RA appraisal scales.

"Generative artificial intelligence for automated essay scoring: Exploring teacher agency through an ecological perspective"
Jessie S. Barrot
Assessing Writing, Vol. 67, Article 100990. Published 2026-01-01 (epub 2025-11-11). DOI: 10.1016/j.asw.2025.100990

Abstract: Generative artificial intelligence (AI) is increasingly used in writing assessment, particularly for automated essay scoring (AES) and for generating formative feedback within automated writing evaluation (AWE). While AI-driven AES enhances efficiency and consistency, concerns regarding accuracy, bias, and ethical implications raise critical questions about its role in assessment. This paper examines the impact of generative AI on teacher agency through an ecological perspective, which considers agency as shaped by personal, institutional, and sociocultural factors. The analysis highlights the need for teachers to critically mediate AI-generated scores and feedback to align them with pedagogical goals, ensuring AI functions as an assistive tool rather than a determinant of assessment outcomes. Although AI can streamline assessment, over-reliance risks diminishing teachers' evaluative expertise and reinforcing biases embedded in AI systems. Ethical concerns, including transparency, data privacy, and fairness, further complicate its adoption. To address these challenges, this paper proposes a framework for responsible AI integration that prioritizes bias mitigation, data security, and teacher-driven decision-making. The discussion concludes with pedagogical implications and directions for future research on AI-assisted writing assessment.

"Assessing the effects of explicit coherence instruction on EFL students' integrated writing performance"
Xi Li, Mo Chen
Assessing Writing, Vol. 67, Article 101019. Published 2026-01-01 (epub 2026-01-30). DOI: 10.1016/j.asw.2026.101019

Abstract: As a key attribute of effective writing, coherence remains challenging to teach in language classrooms, with traditional writing instruction frequently overlooking coherence in favor of discrete, rule-based features. This mixed-methods study investigates the effectiveness of explicit coherence instruction on English-as-a-Foreign-Language (EFL) students' performance on integrated writing tasks. The study employed a controlled experimental design with 64 upper-intermediate-level undergraduate students at a Chinese university, drawing on Hasan's Cohesive Harmony theory as the theoretical framework. The experimental group (n = 32) received explicit instruction on coherence with a focus on cohesive chains and cohesive devices in integrated writing, while the control group (n = 32) received standard paraphrasing instruction. Quantitative analysis revealed that the experimental group showed significant improvements in coherence scores and multiple cohesive chain measures. Qualitative discourse analysis of six students' writing samples from the experimental group demonstrated varying levels of improvement in writing coherence, with high-performing students showing better use of identity chains and pronoun references. The findings revealed that explicit instruction on coherence significantly improved students' performance in creating coherent integrated writing, particularly through the development of cohesive chains and appropriate use of cohesive devices. This study underscores the pedagogical value of teaching coherence to enhance writing quality and provides concrete strategies for developing more effective teaching approaches for integrated writing tasks in EFL contexts.

"The effects of online resource use on L2 learners' computer-mediated writing processes and written products"
Honglan Wang, Jookyoung Jung
Assessing Writing, Vol. 67, Article 100994. Published 2026-01-01 (epub 2025-12-02). DOI: 10.1016/j.asw.2025.100994

Abstract: While previous studies on online resource use in L2 writing have focused on overall writing quality, limited attention has been paid to its effects on linguistic complexity and real-time writing processes. Addressing this gap, the present study explored how online resource use influences both the processes and products of L2 writing. Forty-nine intermediate L2 learners completed two computer-mediated argumentative writing tasks, either with or without the use of online resources. Writing behaviors were captured via keystroke logging and screen recording, and analyzed for search activity, fluency, pausing, and revision quantity. Cognitive processes were examined through stimulated recall interviews, and written products were evaluated for both quality and linguistic complexity. The results showed that participants spent an average of 14% of task time using online resources, with considerable individual variation. Mixed-effects modeling revealed that resource use facilitated the production of more sophisticated words, with marginal influence on writing quality or syntactic complexity. Resource use was also associated with longer between-word pauses, fewer within-word pauses, and reduced revisions. These findings highlight the potential of online resource use to enhance the authenticity of L2 writing assessment tasks without compromising test validity, while encouraging the use of more advanced vocabulary in writing.

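The between-word versus within-word pause distinction reported above is typically derived from keystroke logs. The sketch below is not the study's analysis pipeline; it is a minimal illustration, assuming a hypothetical event log of (timestamp, character) pairs and an invented 2-second pause threshold (real studies calibrate this threshold empirically).

```python
def classify_pauses(keystrokes, threshold=2.0):
    """Split inter-keystroke intervals into between-word and within-word pauses.

    keystrokes: list of (timestamp_seconds, char) events in typing order.
    An interval of at least `threshold` seconds counts as a pause; it is
    between-word when the preceding keystroke was a space, within-word otherwise.
    """
    between, within = [], []
    for (t_prev, ch_prev), (t_cur, _) in zip(keystrokes, keystrokes[1:]):
        gap = t_cur - t_prev
        if gap >= threshold:
            (between if ch_prev == " " else within).append(gap)
    return between, within

# Hypothetical log: a long pause after a space, then one mid-word.
log = [(0.0, "t"), (0.2, "o"), (0.4, " "), (3.0, "w"), (5.5, "r"), (5.7, "i")]
b, w = classify_pauses(log)
print(b, w)  # one between-word pause (~2.6 s), one within-word pause (~2.5 s)
```

A real pipeline would also have to handle backspaces, cursor movements, and revisions before classifying pauses; this sketch covers only linear text production.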
"Verb-centric or balanced?: An NLP-based assessment of word class contributions to L2 writing proficiency"
Hyunwoo Kim, Haerim Hwang
Assessing Writing, Vol. 67, Article 100997. Published 2026-01-01 (epub 2025-12-06). DOI: 10.1016/j.asw.2025.100997

Abstract: Despite the significant role of verbs in second language (L2) development, few studies have explicitly tested whether verbs predict L2 writing proficiency better than other lexical categories, such as adjectives, adverbs, and nouns. Motivated by the theoretical and linguistic prominence of verbs, this study examines whether verbs serve as stronger predictors of L2 writing proficiency than other word classes. Our category-based analysis of lexical diversity and sophistication features in argumentative essays showed that verbs play a distinct and prominent role in L2 writing, outperforming other lexical categories as predictors of L2 proficiency. Specifically, higher-proficiency argumentative essays were found to exhibit a greater variety of verbs that are infrequent, academically oriented, less familiar, and acquired later in language development. At the theoretical level, our findings align with perspectives emphasizing the crucial role of verbs in language development. Pedagogically, our findings highlight the need for category-based vocabulary instruction in L2 writing classes.

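The category-based analysis described here rests on computing lexical measures separately per word class. As a hedged illustration (not the authors' NLP pipeline, which the abstract does not detail), the sketch below computes a simple type-token ratio per word class from pre-tagged tokens; the coarse tag labels and sample text are invented for the example, and real analyses would use a POS tagger plus more robust diversity indices.

```python
from collections import defaultdict

def lexical_diversity_by_class(tagged_tokens):
    """Compute a simple type-token ratio (TTR) for each word class.

    tagged_tokens: list of (word, pos) pairs, where pos is a coarse
    category label such as 'VERB', 'NOUN', 'ADJ', or 'ADV'.
    Returns {pos: distinct_types / total_tokens}.
    """
    by_class = defaultdict(list)
    for word, pos in tagged_tokens:
        by_class[pos].append(word.lower())
    return {
        pos: len(set(words)) / len(words)
        for pos, words in by_class.items()
    }

# Toy essay fragment, pre-tagged with invented coarse POS labels.
sample = [
    ("students", "NOUN"), ("analyze", "VERB"), ("arguments", "NOUN"),
    ("and", "CONJ"), ("evaluate", "VERB"), ("evidence", "NOUN"),
    ("they", "PRON"), ("analyze", "VERB"), ("sources", "NOUN"),
]
scores = lexical_diversity_by_class(sample)
print(scores["VERB"])  # 2 distinct verbs / 3 verb tokens ≈ 0.667
```

Plain TTR is sensitive to text length, which is why studies of this kind usually prefer length-corrected measures; the per-category decomposition shown here is the relevant idea.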
"Unacclimatized?: Understanding the potential of labor-based contract grading interventions in Chinese EFL writing contexts"
Chenggang Liang, Shulin Yu, Nan Zhou, Feng Geng
Assessing Writing, Vol. 67, Article 100993. Published 2026-01-01. DOI: 10.1016/j.asw.2025.100993

Abstract: Despite exuberant scholarly discussion of labor-based contract grading as an alternative to traditional high-stakes writing assessment, few empirical studies have explored its effects on students' writing outcomes, especially in L2 writing contexts. Set in EFL writing classes at a Chinese university, this study examined the effectiveness of a labor-based contract grading intervention on both academic and affective outcomes in L2 writing over one semester. Data were collected from 160 students (72 in the control group and 88 in the experimental group; M_age = 18.60, SD = 0.68, 48.13% female) before and after the intervention. Results of a repeated-measures MANOVA revealed significant differences in students' academic writing outcomes between the two groups. Specifically, the labor-based contract grading group showed less improvement in L2 writing performance than the traditional high-stakes assessment group. Moreover, no significant differences were observed in students' affective writing outcomes across the two groups. The findings highlight the importance of considering contextual factors when implementing labor-based contract grading in diverse writing contexts. Relevant implications and directions for future research are discussed.

"Unveiling the antecedents of feedback-seeking behavior in L2 writing: The impact of future L2 writing selves and emotions"
Jia Li, Lawrence Jun Zhang
Assessing Writing, Vol. 67, Article 101009. Published 2026-01-01 (epub 2025-12-17). DOI: 10.1016/j.asw.2025.101009

Abstract: Existing research on feedback in second or foreign language (L2) writing has predominantly focused on the effectiveness of various feedback practices and their impacts on writing performance. Limited attention has been devoted to learners' proactive role in seeking feedback, and how this important yet underexplored construct correlates with conative and affective variables remains insufficiently examined. To help fill that void, we explored the concept of feedback-seeking behavior and its antecedents in L2 writing by examining its correlations with future L2 writing selves and emotions, particularly unpacking the mediating effect of emotions in the chain of "motivation → emotion → increased or decreased behavior" among 225 undergraduate English-major students. Structural equation modeling unveiled that ideal and ought-to L2 writing selves directly and significantly influenced emotions, and that emotions significantly impacted the two dimensions of feedback-seeking behavior. More importantly, the ideal L2 writing self indirectly influenced feedback monitoring and feedback inquiry through the mediation of writing enjoyment. Nevertheless, writing boredom exercised no significant mediating effect between future L2 selves and feedback-seeking behavior. These findings reinforce the learner-centered perspective that positions students as proactive agents and offer notable implications for L2 writing instruction, advancing our understanding of teacher feedback.

"Beyond the page: A multimodal self-efficacy framework for assessing L2 digital-academic writing"
Ke Li, Ying Hong, Chen Hao
Assessing Writing, Vol. 67, Article 101010. Published 2026-01-01 (epub 2025-12-13). DOI: 10.1016/j.asw.2025.101010

Abstract: As academic writing becomes increasingly digital and multimodal, traditional assessments of L2 writing self-efficacy, centered exclusively on print-based, monomodal tasks, inadequately represent learners' confidence in contemporary composition practices. While existing self-efficacy instruments have advanced our understanding of writing beliefs, including recent multidimensional scales and emerging work on multimodal processes, no validated tool specifically measures L2 writers' perceived efficacy in performing integrated multimodal academic writing tasks. This gap is particularly consequential as EAP assessments increasingly require students to synthesize audio, visual, and textual sources within digitally mediated writing environments, demands that extend beyond the constructs captured by existing scales. This study introduces and validates the Multimodal Academic Writing Self-Efficacy Scale (MAWSE), designed to assess L2 writers' beliefs about their capabilities in four theoretically distinct dimensions: digital content comprehension (interpreting multimodal sources), multimodal discourse synthesis (integrating cross-modal information), genre and format navigation (adapting to multimodal conventions), and self-regulation across digital platforms (managing multimodal composing processes). Using a sequential mixed-methods design, we gathered data from 1063 EFL university students across three institutional contexts in China. Qualitative think-aloud protocols informed item development, while exploratory and confirmatory factor analyses established construct validity. Multi-group invariance testing confirmed measurement equivalence across key and non-key university groups, and structural equation modeling revealed that MAWSE scores significantly predicted performance on scenario-based academic writing tasks involving integrated multimodal sources (β = 0.41, p < .01). The findings offer a psychometrically robust, construct-relevant tool that extends existing self-efficacy frameworks into the multimodal domain, addressing an empirical gap while supporting more accurate and equitable assessment practices in digitally enriched EAP contexts. This research contributes to evolving scholarship by validating an instrument aligned with the realities of contemporary academic communication.
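Scale development of this kind normally reports an internal-consistency statistic alongside the factor analyses. As an illustrative aside (the abstract above does not report this particular computation), the sketch below computes Cronbach's alpha, the standard reliability coefficient for multi-item scales; the item scores are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for k items measured on the same respondents.

    item_scores: list of k lists, each holding one item's scores across
    respondents. alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals).
    Population variance is used consistently throughout.
    """
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three hypothetical questionnaire items, five respondents each (1-5 Likert).
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [2, 4, 5, 3, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # ≈ 0.911 for this made-up data
```

Alpha is only a lower bound on reliability and assumes tau-equivalence, which is why validation studies pair it with confirmatory factor analysis, as the study above does.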