Assessing Writing | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100865
S.A. Crossley, Y. Tian, P. Baffour, A. Franklin, M. Benner, U. Boser
{"title":"A large-scale corpus for assessing written argumentation: PERSUADE 2.0","authors":"S.A. Crossley , Y. Tian , P. Baffour , A. Franklin , M. Benner , U. Boser","doi":"10.1016/j.asw.2024.100865","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100865","url":null,"abstract":"<div><p>This research methods article introduces the open source PERSUADE 2.0 corpus. The PERSUADE 2.0 corpus comprises over 25,000 argumentative essays produced by 6th-12th grade students in the United States for 15 prompts on two writing tasks: independent and source-based writing. The PERSUADE 2.0 corpus also provides detailed individual and demographic information for each writer. The goal of the PERSUADE 2.0 corpus is to advance research into relationships between discourse elements, their effectiveness, writing quality, writing tasks and prompts, and demographic and individual differences.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100865"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000588/pdfft?md5=10d16ed8c4682e0e6cfee1fadb38e0bd&pid=1-s2.0-S1075293524000588-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141487094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100870
Youmie J. Kim, Matthew J. Hammill
{"title":"Influence of prior educational contexts on directed self-placement of L2 writers","authors":"Youmie J. Kim, Matthew J. Hammill","doi":"10.1016/j.asw.2024.100870","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100870","url":null,"abstract":"<div><p>Directed self-placement (DSP) allows for student agency in writing placement. DSP has been implemented in many composition programs, although it has not been used as widely for L2 writers in higher education. This study investigates the relationship between student placement decisions and students’ prior educational backgrounds, particularly in relationship to whether they had attended an English-medium high school or an intensive English program (IEP). Actual placement results via an exam were compared to 804 students’ self-placement decisions and correlated with their prior educational backgrounds. Findings indicated that most students’ DSP decisions matched actual exam placement results. However, there was a large number of DSP decisions that were higher or lower than exam placement results. Additionally, the longer students studied at an English-medium instruction high school, the more likely they were to place themselves higher than their exam placement. We conclude that DSP can be used in L2 writing programs, but with careful attention to learners’ educational backgrounds, proficiency, and sense of identity.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100870"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-07-01 | DOI: 10.1016/j.asw.2024.100873
Kelly Hartwell, Laura Aull
{"title":"\"Navigating innovation and equity in writing assessment\"","authors":"Kelly Hartwell , Laura Aull","doi":"10.1016/j.asw.2024.100873","DOIUrl":"10.1016/j.asw.2024.100873","url":null,"abstract":"<div><p>The 2024 Tools & Technology forum underscores the significant role of emerging writing technologies in shaping writing assessment practices post-COVID-19, emphasizing the necessity of ensuring that these innovations uphold core principles of validity, fairness, and equity. AI-driven tools offer promising improvements but also require careful consideration to ensure that they reflect writing constructs, align with educational goals, and promote equitable assessment practices. Validity is explored through dimensions such as construct, content, and consequential validity, raising questions about how assessment tools may capture the complexity of writing and their broader impacts on educational stakeholders. Fairness in writing assessment is examined with regard to cultural responsiveness and accessibility, and how assessment tools may be designed to accommodate various student needs. Equity extends these considerations by addressing systemic inequities and promoting assessment practices that support diverse learning styles and reduce barriers for marginalized students. The reviews of three assessment tools—PERSUADE 2.0, EvaluMate, and a web application for systematic review writing—illustrate how innovations can support valid, fair, and equitable writing assessments across educational contexts. The forum emphasizes the importance of ongoing dialogue and adaptation to create inclusive and just educational experiences.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100873"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141638579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of peer feedback in English writing classes on EFL students’ writing feedback literacy","authors":"Fanrong Weng , Cecilia Guanfang Zhao , Shangwen Chen","doi":"10.1016/j.asw.2024.100874","DOIUrl":"10.1016/j.asw.2024.100874","url":null,"abstract":"<div><p>Despite the increasing scholarly attention towards students’ writing feedback literacy in recent years, empirical explorations of effective approaches to enhancing this capacity remain scarce. While peer feedback often plays an important role in English as a Foreign Language (EFL) writing development, few studies seem to have addressed the potential impacts of peer feedback activities on students’ overall writing feedback literacy. To fill this gap, a mixed-methods study was designed to investigate the effect of peer feedback activities on students’ writing feedback literacy development across such dimensions as appreciating feedback, making judgements, acknowledging different sources of feedback, managing affect, and taking actions with feedback. Two intact classes, one as the experimental group and the other control group, participated in the study. The experimental group engaged in peer feedback activities during the semester (12 weeks), whereas the control group received conventional teacher feedback only. The pre- and post-intervention results based on a writing feedback literacy scale were compared between the two groups, in addition to the analysis of interviews with the teacher and focal students from the experimental group, as well as students’ written assignments and revisions after receiving peer feedback. Results showed that peer feedback activities could significantly improve students’ appreciation of feedback and their ability to make judgements. Nevertheless, no significant changes in other dimensions were identified. These findings extend the current understanding of EFL students’ writing feedback literacy and hold valuable pedagogical implications.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100874"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141852395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-06-17 | DOI: 10.1016/j.asw.2024.100863
Muhammad M.M. Abdel Latif, Zainab Alsuhaibani, Asma Alsahil
{"title":"Matches and mismatches between Saudi university students' English writing feedback preferences and teachers' practices","authors":"Muhammad M.M. Abdel Latif , Zainab Alsuhaibani , Asma Alsahil","doi":"10.1016/j.asw.2024.100863","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100863","url":null,"abstract":"<div><p>Though much research has dealt with feedback practices in L2 writing classes, scarce studies have tried to investigate learner and teacher feedback perspectives from a wide angle. Drawing on an 8-dimension framework of feedback in writing classes, this study investigated the potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data was collected using a student questionnaire and a teacher one. The two surveys assessed students' preferences for and teachers' use of 26 writing feedback modes, strategies and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices vary from one feedback dimension to another. The study generally indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceed the matches. The qualitative data obtained from the answers to a set of open-ended questions in both questionnaires provided information about the students' and teachers' feedback-related beliefs and reasons. The paper ends with discussing the results and their implications.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100863"},"PeriodicalIF":3.9,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141423148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-06-13 | DOI: 10.1016/j.asw.2024.100867
Sachiko Yasuda
{"title":"Does “more complexity” equal “better writing”? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners’ argumentative writing","authors":"Sachiko Yasuda","doi":"10.1016/j.asw.2024.100867","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100867","url":null,"abstract":"<div><p>The study examines the relationship between form-based complexity and meaning-based complexity in argumentative essays written by high school students learning English as a foreign language (EFL) in relation to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students’ proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The students’ essays were analyzed from multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding what aspects of writing to prioritize and how best to assess student writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100867"},"PeriodicalIF":3.9,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-06-07 | DOI: 10.1016/j.asw.2024.100862
Jihua Dong, Yanan Zhao, Louisa Buckingham
{"title":"Thirty years of writing assessment: A bibliometric analysis of research trends and future directions","authors":"Jihua Dong , Yanan Zhao , Louisa Buckingham","doi":"10.1016/j.asw.2024.100862","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100862","url":null,"abstract":"<div><p>This study employs a bibliometric analysis to identify the research trends in the field of writing assessment over the last 30 years (1993–2022). Employing a dataset of 1,712 articles and 52,092 unique references, keyword co-occurrence analyses were used to identify prominent research topics, co-citation analyses were conducted to identify influential publications and journals, and a structural variation analysis was employed to identify transformative research in recent years. The results revealed the growing popularity of the writing assessment field, and the increasing diversity of research topics in the field. The research trends have become more associated with technology and cognitive and metacognitive processes. The influential publications indicate changes in research interest towards cross-disciplinary publications. The journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points out possible future directions, including the integration of computational methods in writing assessment, and investigations into relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research, and has implications for researchers and practitioners.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100862"},"PeriodicalIF":3.9,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100864
Kai Guo
{"title":"EvaluMate: Using AI to support students’ feedback provision in peer assessment for writing","authors":"Kai Guo","doi":"10.1016/j.asw.2024.100864","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100864","url":null,"abstract":"<div><p>Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers’ feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers’ provision of comments on peers’ essays. I also address the system’s limitations and propose potential solutions. Furthermore, I recommend future research on students’ engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100864"},"PeriodicalIF":3.9,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-05-31 | DOI: 10.1016/j.asw.2024.100849
Xiaozhu Wang, Jimin Wang
{"title":"Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process","authors":"Xiaozhu Wang, Jimin Wang","doi":"10.1016/j.asw.2024.100849","DOIUrl":"https://doi.org/10.1016/j.asw.2024.100849","url":null,"abstract":"<div><p>As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated the differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, of which the character writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal the text quality differences. Keystrokes and handwriting trace data were utilized to unveil insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They revised much more, paused shorter and less frequently between lower-level linguistic units in the CB mode. The quality of CB text is associated with revision behavior, whereas pause duration serves as a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the construct of PB distinct from the CB writing assessment in L2 Chinese. Thus, the setting of the assessment mode should consider the target language use and the test taker’s characteristics.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100849"},"PeriodicalIF":3.9,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141243091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing Writing | Pub Date: 2024-05-30 | DOI: 10.1016/j.asw.2024.100848
Rabail Qayyum
{"title":"A teacher’s inquiry into diagnostic assessment in an EAP writing course","authors":"Rabail Qayyum","doi":"10.1016/j.asw.2024.100848","DOIUrl":"10.1016/j.asw.2024.100848","url":null,"abstract":"<div><p>Research into diagnostic assessment of writing has largely ignored how diagnostic feedback information leads to differentiated instruction and learning. This case study research presents a teacher’s account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student’s performance in and perception of the assessment. The analysis revealed that to an extent the absence of proper feedback mechanisms limited the use of the test, somewhat weakened its impact, and reduced the potential for learning. I propose a modification to the assessment procedure involving a sample student feedback report.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"61 ","pages":"Article 100848"},"PeriodicalIF":3.9,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141188259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}