Author: Qin Xie
DOI: 10.1016/j.asw.2023.100755
Journal: Assessing Writing
Publication date: 2023-07-01
URL: https://www.sciencedirect.com/science/article/pii/S1075293523000636
Assessing source use: Summary vs. reading-to-write argumentative essay
What is involved in source use and how to assess it have been key concerns of research on L2 integrated writing assessment. However, raters’ ability to reliably assess the construct remains scarcely investigated, as do the relations among different types of integrated writing tasks. To partially address this gap, the present study asked a sizeable sample (N = 204) of undergraduates from three Hong Kong universities to complete a summary task and an integrated reading-to-write argumentative essay task under test-like conditions. Then, focusing on the criteria of source use, it analysed raters’ application of analytical rubrics in assessing the writing outputs. Rater variability and scale structures were examined through Multi-Facet Rasch Measurement and compared across the two writing tasks. Both similarities and differences were found. In the summary task, the criteria for source use were applied similarly to the criteria for language use and discourse features. In the essay task, however, the application of the source use criteria was much less consistent. Diagnostic statistics indicate that fewer levels on the scale would be more advisable. For both tasks, the criterion of source language use was found neither to fit the overall model nor to align with the criteria for source ideas or language use, indicating that this criterion may represent a trait distinct from the others. The statistical relations between source use and the other subconstructs of integrated writing tasks are also reported herein. Implications are discussed in the interest of refining the assessment of the source use construct in the future.
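For readers unfamiliar with the analytic method named in the abstract, the Multi-Facet Rasch Measurement (many-facet Rasch model, MFRM) is conventionally specified as follows. This is the standard Linacre formulation, not an equation taken from the paper itself; the symbols are the usual textbook ones:

$$
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
$$

where $P_{nijk}$ is the probability that examinee $n$ receives category $k$ (rather than $k-1$) on criterion $i$ from rater $j$; $B_n$ is examinee ability, $D_i$ is criterion difficulty, $C_j$ is rater severity, and $F_k$ is the difficulty of the step from category $k-1$ to $k$ on the rating scale. Fit statistics and step-calibration diagnostics from this model are what underlie the abstract's findings about rater consistency and the advisability of fewer scale levels.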