{"title":"评估来源使用:摘要与阅读写作议论文","authors":"Qin Xie","doi":"10.1016/j.asw.2023.100755","DOIUrl":null,"url":null,"abstract":"<div><p>What is involved in source use and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters’ ability to reliably assess the construct remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study had a sizeable sample (N = 204) of undergraduates from three Hong Kong universities write a summary and an integrated reading-to-write argumentative essay task in a test-like condition. Then, focusing on the criteria of source use, it analysed raters’ application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through the Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied similarly to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent. Diagnostic statistics indicate that fewer levels on the scale would be more advisable. For both tasks, the criterion of <em>source language use</em> was found not to fit the overall model nor to align with the criteria for source ideas or language use, indicating that this criterion may represent a trait different from the other. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported herein. Implications are discussed in the interest of refining the assessment of the source use construct in the future.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"57 ","pages":"Article 100755"},"PeriodicalIF":4.2000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing source use: Summary vs. reading-to-write argumentative essay\",\"authors\":\"Qin Xie\",\"doi\":\"10.1016/j.asw.2023.100755\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>What is involved in source use and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters’ ability to reliably assess the construct remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study had a sizeable sample (N = 204) of undergraduates from three Hong Kong universities write a summary and an integrated reading-to-write argumentative essay task in a test-like condition. Then, focusing on the criteria of source use, it analysed raters’ application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through the Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied similarly to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent. Diagnostic statistics indicate that fewer levels on the scale would be more advisable. 
For both tasks, the criterion of <em>source language use</em> was found not to fit the overall model nor to align with the criteria for source ideas or language use, indicating that this criterion may represent a trait different from the other. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported herein. Implications are discussed in the interest of refining the assessment of the source use construct in the future.</p></div>\",\"PeriodicalId\":46865,\"journal\":{\"name\":\"Assessing Writing\",\"volume\":\"57 \",\"pages\":\"Article 100755\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Assessing Writing\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1075293523000636\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293523000636","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Assessing source use: Summary vs. reading-to-write argumentative essay
What source use involves and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters' ability to assess the construct reliably remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study had a sizeable sample (N = 204) of undergraduates from three Hong Kong universities complete a summary task and an integrated reading-to-write argumentative essay task under test-like conditions. Then, focusing on the criteria of source use, it analysed raters' application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied in a manner similar to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent. Diagnostic statistics indicate that a rating scale with fewer levels would be advisable. For both tasks, the criterion of source language use was found neither to fit the overall model nor to align with the criteria for source ideas or language use, indicating that it may represent a trait distinct from the others. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported. Implications are discussed with a view to refining the assessment of the source use construct.
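For context, rater-variability analyses of this kind rest on the many-facet Rasch model. A common three-facet, rating-scale form of that model (examinee, rater, criterion) is sketched below; this is the generic textbook formulation, not necessarily the exact specification estimated in the study:

\log \frac{P_{nijk}}{P_{nij(k-1)}} = B_n - C_j - D_i - F_k

where P_{nijk} is the probability that examinee n receives category k from rater j on criterion i, B_n is the examinee's ability, C_j the rater's severity, D_i the criterion's difficulty, and F_k the step difficulty of moving from category k-1 to k on the scale. Fit and separation statistics derived from this model are what underwrite statements such as a criterion "not fitting the overall model" or a scale having too many levels.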
Journal introduction:
Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised) testing of writing, alternative performance assessments (such as portfolios), workplace sampling and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.