Assessing Writing — Latest Articles

Examining the use of academic vocabulary in first-year ESL undergraduates’ writing: A corpus-driven study in Hong Kong
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2025-01-01 DOI: 10.1016/j.asw.2024.100913
Edsoulla Chung, Aaron Wan
A good command of academic vocabulary is important for academic success in higher education. However, research has primarily focused on the receptive academic vocabulary knowledge of L2 learners while devoting relatively limited attention to their productive use of such vocabulary and its impact on writing quality. To address this gap, we analysed problem-solution essays written by 168 first-year undergraduates in Hong Kong, focusing on the relationship between their use of academic words from the Academic Vocabulary List (AVL) and the overall quality of their writing. We also explored the relationship between the size of students’ receptive academic vocabulary and the frequency of its use in writing. Findings revealed that high-scoring essays contained a greater density and diversity of academic vocabulary than low-scoring essays, with greater frequency of words from the 1–500 and 501–1000 tiers of the AVL significantly predicting better writing quality. The essays also showed a significant relationship between participants’ receptive academic vocabulary size and the diversity of academic words used in writing. However, no significant relationship was observed between receptive academic vocabulary size and the density of academic words used. We highlight the implications of these findings for EAP teaching and research.
Citations: 0
A meta-analysis of relationships between syntactic features and writing performance and how the relationships vary by student characteristics and measurement features
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2025-01-01 DOI: 10.1016/j.asw.2024.100909
Jiali Wang, Young-Suk G. Kim, Joseph Hin Yan Lam, Molly Ann Leachman
Students’ proficiency in constructing sentences affects both the writing process and written products, and the linguistic demands of writing differ by student characteristics and measurement features. To identify these varying syntactic demands, we conducted a meta-analysis examining the relationships between syntactic features (complexity and accuracy) and writing performance (quality, productivity, and fluency), as well as the moderating effects of student characteristics and measurement features. A total of 109 studies (871 effect sizes; 24,628 participants) met the inclusion criteria. Results showed weak relationships for syntactic accuracy (r = .25) and syntactic complexity (r = .16). Writer characteristics (grade level and language proficiency) and measurement features (writing genre, writing outcome, whether the writing task was text-based, and the type of syntactic complexity measure) were significant moderators for certain syntactic features. The findings highlight the importance of writer and measurement factors when considering the relationships between linguistic features in writing and writing performance. Implications are discussed regarding the selection of syntactic features for assessing language use in writing, gaps in the literature, and significance for writing instruction and assessment.
Citations: 0
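Meta-analyses of correlations like the one above typically pool per-study r values after a Fisher z-transformation, weighting each study by its sample size. The abstract does not give the authors’ exact pooling procedure, so the sketch below is only a generic, fixed-effect illustration with hypothetical study data, not the paper’s analysis.

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlation coefficients via Fisher's z.

    Each r is transformed to z = atanh(r), weighted by n - 3 (the inverse
    of the sampling variance of z), averaged, and back-transformed with tanh.
    """
    zs = [math.atanh(r) for r in rs]
    weights = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)

# Hypothetical per-study correlations and sample sizes
pooled = pool_correlations([0.20, 0.30, 0.25], [50, 100, 80])
print(round(pooled, 3))
```

A random-effects model (as is more common when moderators are of interest) would additionally estimate between-study variance, e.g. with the DerSimonian–Laird estimator, before weighting.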
Editorial Volume 63
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2025-01-01 DOI: 10.1016/j.asw.2025.100917
Martin East , David Slomp
Citations: 0
Effects of a genre and topic knowledge activation device on a standardized writing test performance
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100898
Natalia Ávila Reyes , Diego Carrasco , Rosario Escribano , María Jesús Espinosa , Javiera Figueroa , Carolina Castillo
The aim of this article was twofold: first, to introduce a design for a writing test intended for use in large-scale assessments of writing, and second, to examine experimentally the effects of a device for activating prior knowledge of topic and genre as a means of controlling construct-irrelevant variance and enhancing validity. An authentic, situated writing task was devised, offering students a communicative purpose and a defined audience. Two devices were used for the cognitive activation of topic and genre knowledge: an infographic and a genre model. The participants were 162 fifth-grade students from Santiago de Chile, with 78 assigned to the experimental condition (with the activation device) and 84 to the control condition (without it). The results demonstrate that the odds of demonstrating good writing ability are higher for students in the experimental group, even when controlling for text-transcription ability, a known predictor of writing. These findings hold implications for the development of large-scale writing tests guided by principles of educational and social justice.
Citations: 0
Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100899
Loc Nguyen , Jessie S. Barrot
Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize the viability of these tools, they have also raised concerns about the tools’ potential adverse effects on accurately reflecting students’ actual level of writing performance. It is therefore crucial for teachers to distinguish AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how teachers assess and distinguish between essays produced by AI and by human authors. This study therefore analyzed the scores and comments teachers gave and examined their strategies for identifying the source of each essay. Findings showed that essays by a native English-speaking (NS) lecturer and by ChatGPT were rated highly, whereas essays by an NS college student, a non-native English-speaking (NNS) college student, and an NNS lecturer scored lower, making them distinguishable from AI-generated text. The study also revealed that teachers could not consistently identify AI-generated text, particularly when compared with texts written by an NS professional. These findings were attributed to teachers’ past engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. From these results, implications for L2 writing instruction and future research are discussed.
Citations: 0
A comparative study of voice in Chinese English-major undergraduates’ timed and untimed argument writing
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100896
Xiangmin Zeng , Jie Liu , Neil Evan Jon Anthony Bowen
As a somewhat elusive and occlusive concept, voice can be a formidable hurdle for second language (L2) writers. One area that exemplifies this struggle is timed argument writing, where authors must position claims, ideas, and individual perspectives in relation to an existing knowledge base and scholarly community under the constraints of time. To enrich our understanding of voice construction in L2 English writers’ timed writing, we explored how 41 Chinese English-major undergraduates deployed authorial voice in two prompt-based argument writing tasks (timed versus untimed). We also sampled their self-reported knowledge, use, and understanding of voice through a survey-based instrument. To compare the quantity and quality of voice construction between the two samples, we measured 10 voice categories, three voice dimensions, and overall voice strength. Results showed that only two categories displayed statistically significant differences in frequency, but all three voice dimensions and overall voice strength scored significantly higher in the untimed writing samples. Based on the results of our text analysis and survey, we highlight the complexities of voice in L2 writing, provide evidence in support of existing voice rubrics, and make practical suggestions for teaching and assessing voice in writing.
Citations: 0
The impact of task duration on the scoring of independent writing responses of adult L2-English writers
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100895
Ben Naismith , Yigal Attali , Geoffrey T. LaFlair
In writing assessment, there is an inherent tension between authenticity and practicality: tasks with longer durations may more closely reflect real-life writing processes but are less feasible to administer and score. Moreover, given a fixed total testing time, there is necessarily a trade-off between task duration and number of tasks. Traditionally, high-stakes assessments have managed this trade-off by administering one or two writing tasks per test, allowing 20–40 minutes per task. However, research on second language (L2) English writing has not found longer task durations to significantly improve score validity or reliability, and very few studies have compared much shorter writing-task durations to more traditional allotments. To explore this issue, we asked adult L2-English test takers to respond to two writing prompts with either 5-minute or 20-minute time limits. Responses were then evaluated by expert human raters and an automated writing evaluation tool. Regardless of scoring method, short-duration scores showed test-retest reliability and criterion validity as high as those of long-duration scores. As expected, longer task durations yielded higher scores, but regardless of duration, test takers demonstrated the entire spectrum of writing proficiency. Implications for writing assessment are discussed in relation to scoring practices and task design.
Citations: 0
A structural equation investigation of linguistic features as indices of writing quality in assessed secondary-level EMI learners’ scientific reports
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-10-01 DOI: 10.1016/j.asw.2024.100897
Jack Pun , Wangyin Kenneth Li
While the relationship between linguistic features and L2 writing quality is a long-standing line of research, little scholarly attention has been paid to the predictive value of linguistic features in assessing the quality of English-medium scientific report writing. This study adds to the existing literature by examining the relation of lexical and syntactic complexity to writing quality, based on 106 scientific reports composed by Hong Kong Chinese learners of English in EMI secondary schools. Natural language processing tools were employed to extract computational indices of linguistic complexity, followed by a structural equation modeling (SEM) approach to investigate their predictive power. The validity of the anticipated construct was confirmed against several goodness-of-fit criteria. The SEM analysis indicated that writing quality was predicted by lexical sophistication (text-based complexity: word range and academic words; psycholinguistic complexity: word familiarity and age-of-acquisition ratings), lexical diversity (MTLD and VocD), and syntactic complexity (mean length of sentence and dependent clauses per T-unit). However, the relation of lexical diversity and syntactic complexity to writing quality was mediated by lexical sophistication. Implications for scientific report writing assessment and pedagogy in EMI educational contexts are discussed.
Citations: 0
Validating an integrated reading-into-writing scale with trained university students
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-09-24 DOI: 10.1016/j.asw.2024.100894
Claudia Harsch , Valeriia Koval , Paraskevi (Voula) Kanistra , Ximena Delgado-Osorio
Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances on four summary and opinion tasks, which were rated by six trained student raters who were preparing to become writing tutors for first-year students. We used a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning, and thematic analysis to examine think-aloud protocols, retrospective interviews, and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as a valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threats.
Citations: 0
Understanding the SSARC model of task sequencing: Assessing L2 writing development
IF 4.2 · CAS Tier 1 · Literature
Assessing Writing Pub Date : 2024-09-23 DOI: 10.1016/j.asw.2024.100893
Mahmoud Abdi Tabari , Yizhou Wang , Michol Miller
This study explored the impact of task sequencing on the development of second language (L2) writing, investigating how L2 learners performed on three decision-making writing tasks completed in different orders over nine weeks. A total of 120 advanced-high EFL students were randomly assigned to one of three task sequences: 1) simple-medium-complex (SMC), 2) complex-medium-simple (CMS), or 3) random (RDM). Essays were analyzed using measures of syntactic complexity, accuracy, lexical complexity, and fluency (CALF). Results showed that the CALF of L2 writing developed longitudinally over time in all three task-sequencing groups. CALF development was not immediately apparent in the first six weeks, with most measures displaying a significant increase by the end of the ninth week. Furthermore, different task sequences resulted in varying patterns and magnitudes of CALF growth, but no specific sequence was found to be superior overall.
Citations: 0