Assessing Writing | Pub Date: 2024-05-28 | DOI: 10.1016/j.asw.2024.100846
Qin Xie

Construct representation and predictive validity of integrated writing tasks: A study on the writing component of the Duolingo English Test

Abstract: This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of the Duolingo English Test (DET). It also examined whether they could enhance the DET's power to predict English academic writing in universities. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were given to a sample (N = 204) of undergraduates from Hong Kong. Each participant also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytical rubrics. Raw scores were first processed with Multi-Faceted Rasch Measurement to estimate inter- and intra-rater consistency and to generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and structural equation modeling were conducted, in that order. The analyses verified the writing tasks' underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to contribute to the DET's construct representation and to add moderate predictive power for domain performance. The findings and their practical implications are discussed, especially regarding the complex relations between construct representation and predictive validity.

(Assessing Writing, Vol. 61, Article 100846)
Assessing Writing | Pub Date: 2024-05-16 | DOI: 10.1016/j.asw.2024.100847
Yuxin Hao, Xuelin Wang, Shuai Bin, Qihao Yang, Haitao Liu

How syntactic complexity indices predict Chinese L2 writing quality: An analysis of unified dependency syntactically-annotated corpus

Abstract: Previous syntactic complexity (SC) research on L2 Chinese has overlooked a range of Chinese-specific structures and fine-grained indices. Drawing on a syntactically annotated Chinese L2 writing corpus, this study employs both large-grained and fine-grained SC indices to investigate, from macro and micro perspectives, the relationship between syntactic complexity and the quality of writing produced by English-speaking Chinese second language (ECSL) learners. Our findings reveal the following: (a) at the large-grained level, the generic syntactic complexity (GSC) index "number of T-units per sentence" and the Chinese-specific syntactic complexity (CSC) index "number of clauses per topic-chain unit" account for 14.5% of the total variance in ECSL learners' writing scores; (b) the syntactic diversity model alone accounts for 24.7% of the variance in Chinese writing scores; (c) a stepwise regression model integrating fine-grained SC indices extracted from the annotated corpus explains 43.7% of the variance in Chinese writing quality. This model incorporates CSC indices such as the average ratio of dependency types per 30 dependency segments, the ratio of adjuncts to sentence end, the ratio of predicate complements, the ratio of numeral adjuncts, and the mean length of topic-comment-unit dependency distance, as well as GSC indices such as the ratio of main governors, the ratio of attributers, the ratio of coordinating adjuncts, and the ratio of sentential objects. These findings highlight the insights that fine-grained, syntactically annotated SC indices offer into the writing characteristics of ECSL learners.

(Assessing Writing, Vol. 61, Article 100847)
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100830
Michael Laudenbach, David West Brown, Zhiyu Guo, Suguru Ishizaki, Alex Reinhart, Gordon Weinberg

Visualizing formative feedback in statistics writing: An exploratory study of student motivation using DocuScope Write & Audit

Abstract: Formative feedback in writing instruction has recently been supported by technologies generally referred to as Automated Writing Evaluation tools. However, such tools are limited in their capacity to address specific disciplinary genres, and they have shown mixed results for improving student writing. We explore how technology-enhanced writing interventions can positively affect student attitudes toward and beliefs about writing, both reinforcing content knowledge and increasing student motivation. Using a student-facing text-visualization tool called Write & Audit, we hosted revision workshops for students (n = 30) in an introductory-level statistics course at a large North American university. The tool is designed to be flexible: instructors of various courses can create expectations and predefine genre-specific topics. Students are thus offered non-evaluative formative feedback that redirects them to field-specific strategies. To gauge the usefulness of Write & Audit, we used a previously validated survey instrument designed to measure a construct model of student motivation (Ling et al., 2021). Our results show significant increases in student self-efficacy and in beliefs about the importance of content to successful writing. We contextualize these findings with data from three student think-aloud interviews, which demonstrate metacognitive awareness while using the tool. Although this exploratory study is non-experimental, it contributes a novel approach to automated formative feedback and confirms the promising potential of Write & Audit.

(Assessing Writing, Vol. 60, Article 100830)
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100841
Madhu Neupane Bastola, Guangwei Hu

Engagement with supervisory feedback on master's theses: Do supervisors and students see eye to eye?

Abstract: Student engagement has attracted much research attention in higher education because of the potential benefits associated with improved engagement. Despite this extensive research, little has been written about graduate students' engagement with supervisory feedback. This paper reports on a study of student engagement with supervisory feedback on master's theses in the context of Nepalese higher education. The study employed an exploratory sequential mixed-methods design drawing on interviews and a questionnaire-based survey of supervisors and students from four disciplines at a comprehensive university in Nepal. Analyses of the qualitative and quantitative data revealed significant differences between supervisors' and students' perceptions of all types of student engagement (affective, cognitive, and behavioral). Significant disciplinary variation was also observed in supervisors' and students' perceptions of negative affect, cognitive engagement, and behavioral engagement. Furthermore, disciplinary background and feedback role interacted to shape perceptions of student engagement. These findings have implications for improving student engagement with supervisory feedback.

(Assessing Writing, Vol. 60, Article 100841)
{"title":"Linguistic factors affecting L1 language evaluation in argumentative essays of students aged 16 to 18 attending secondary education in Greece","authors":"Koskinas Emmanouil , Gavriilidou Zoe , Andras Christos , Angelos Markos","doi":"10.1016/j.asw.2024.100844","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100844","url":null,"abstract":"<div><p>The purpose of this paper is to investigate linguistic factors affecting the evaluation of the argumentative essays in written tests taken by junior and senior students, aged 16 to 18, attending high schools in Greece. To achieve this, we analyzed textual characteristics and scoring of 265 juniors and seniors, graded by 15 different raters. To examine the contribution of linguistic parameters to the assessment, we developed an automated tool to record and evaluate students' lexical and syntactic features in the Greek language. The results revealed that the extensive use of nominal groups including an adjective and a noun and the utilization of both impersonal and passive syntax, as well as adverbs to a lesser extent, contribute the most to positive grading in language tests. Furthermore, we identified a correlation between language and the other criteria of the evaluation rubric, namely content and organization. The paper contributes to the discussion about objectivity in writing evaluation in the Greek setting and to the creation of a rubric that ensures a more effective assessment of writing tasks.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"60 ","pages":"Article 100844"},"PeriodicalIF":3.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140822444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100843
Peter Thwaites, Charalambos Kollias, Magali Paquot

Is CJ a valid, reliable form of L2 writing assessment when texts are long, homogeneous in proficiency, and feature heterogeneous prompts?

Abstract: Comparative judgement (CJ) is a method of assessment in which judges perform paired comparisons of pieces of student work and decide which one is "better". CJ has many potential benefits for the writing assessment community, including its reliability, flexibility, and efficiency. However, by reviewing the literature on CJ's application to L2 writing assessment, we find that while existing studies have established the plausibility of using CJ in this context, they provide little indication of the conditions under which the method is most likely to prove useful. In particular, by focusing on the assessment of relatively short texts, covering a wide proficiency range, and using a single essay prompt, they leave unresolved the question of how such textual factors affect CJ's reliability and validity. To address this, we conduct two studies exploring the reliability and validity of a community-driven form of CJ for evaluating L2 texts which were longer, featured a narrower proficiency range, and were more topically diverse than in earlier studies. Our results suggest that CJ remains reliable under these conditions. In addition, comparison with rubric-based assessment using CEFR scales suggests that the CJ approach also has an acceptable level of validity.

(Assessing Writing, Vol. 60, Article 100843)
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100839
Ghulam Abbas Khushik

Is the variation in syntactic complexity features observed in argumentative essays produced by B1 level EFL learners in Finland and Pakistan attributable exclusively to their L1?

Abstract: This study explored the syntactic complexity features of English produced by learners from Pakistan and Finland at the B1 level of the Common European Framework of Reference (CEFR; CoE, 2001). The learners had been taught English as a Foreign Language (EFL) using different pedagogical methods. The study took into account various factors, including the learners' proficiency level, age, and grade, as well as variation in their native language. To assess the impact of native language and pedagogical methods on syntactic complexity, twelfth-grade EFL students from upper-secondary schools in both countries were given identical instructions and time limits to complete an English academic essay on the same topic. Fourteen syntactic complexity features were extracted with the L2 Syntactic Complexity Analyzer (L2SCA), and Mann-Whitney U tests were used to analyze differences between the two groups. The study revealed significant differences between Finnish and Pakistani EFL learners, reflecting both variation in their native languages and the effects of pedagogical methods on syntactic complexity. The implications of this study extend to language testing and assessment, the CEFR framework, and pedagogy in both Finland and Pakistan.

(Assessing Writing, Vol. 60, Article 100839)
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100845
Choo Mui Cheong, Yaping Liu, Run Mu

Characteristics of students' task representation and its association with argumentative integrated writing performance

Abstract: Task representation denotes students' interpretation of what a learning or assessment task requires them to do. An argumentative integrated writing task, which involves using reading materials as claims or evidence in composing an essay, makes task representation more critical than in other tasks, as writers may be unsure whether to focus on synthesizing the reading materials they comprehend or on expressing their own views. To explore the characteristics of task representation and its association with integrated writing, this study invited 474 Secondary Four students from Hong Kong to participate: 36 completed a think-aloud writing protocol followed by stimulated recall interviews, and 438 completed an integrated writing task and a questionnaire. Three factors of task representation were identified (source use, rhetorical purpose, and text format), and significant positive correlations were found between the three factors and integrated writing performance. Theoretical and pedagogical implications are discussed.

(Assessing Writing, Vol. 60, Article 100845)
Assessing Writing | Pub Date: 2024-04-01 | DOI: 10.1016/j.asw.2024.100837