{"title":"为综合阅读写作和听力写作总结任务制定更有效的评分标准","authors":"Sathena Chan, Lyn May","doi":"10.1177/02655322221135025","DOIUrl":null,"url":null,"abstract":"Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W indicate the need for developing descriptors which can more effectively discriminate L-W responses.","PeriodicalId":17928,"journal":{"name":"Language Testing","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2022-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks\",\"authors\":\"Sathena Chan, Lyn May\",\"doi\":\"10.1177/02655322221135025\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. 
The results on the L-W indicate the need for developing descriptors which can more effectively discriminate L-W responses.\",\"PeriodicalId\":17928,\"journal\":{\"name\":\"Language Testing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2022-12-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language Testing\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1177/02655322221135025\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language Testing","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/02655322221135025","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Towards more valid scoring criteria for integrated reading-writing and listening-writing summary tasks
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that discriminate summaries produced by 150 candidates at five levels of proficiency on integrated reading-writing (R-W) and listening-writing (L-W) tasks. The expert judgement revealed a wide range of features which discriminated R-W and L-W responses. When responses at five proficiency levels were coded by these features, significant differences were obtained in seven features, including relevance of ideas, paraphrasing skills, accuracy of source information, academic style, language control, coherence and cohesion, and task fulfilment across proficiency levels on the R-W task. The same features did not yield significant differences in L-W responses across proficiency levels. The findings have important implications for clarifying the construct of integrated summary writing in different modalities, indicating the possibility of expanding integrated rating categories with some potential for translating the identified criteria into automated rating systems. The results on the L-W indicate the need for developing descriptors which can more effectively discriminate L-W responses.
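As a rough illustration of the statistical step described in the abstract, the sketch below shows how feature codes assigned to summaries at five proficiency levels could be tested for group differences. The seven feature names are taken from the abstract; the simulated data, the 1-5 coding scale, the group sizes, and the choice of a Kruskal-Wallis test are assumptions made for illustration only, since the abstract does not state which statistical procedure the authors used.

```python
# Hypothetical sketch: test whether coded feature values differ across
# five proficiency levels. Data and test choice are illustrative
# assumptions, not the study's actual procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Feature names follow the abstract.
FEATURES = [
    "relevance_of_ideas",
    "paraphrasing_skills",
    "accuracy_of_source_information",
    "academic_style",
    "language_control",
    "coherence_and_cohesion",
    "task_fulfilment",
]
LEVELS = [1, 2, 3, 4, 5]   # five proficiency levels
N_PER_LEVEL = 30           # 150 candidates in total (assumed equal split)

def simulate_scores(effect):
    """Simulate ordinal feature codes (1-5) for each proficiency level.

    `effect` controls how strongly the coded feature rises with proficiency,
    standing in for the contrast between the R-W task (clear differences)
    and the L-W task (weak differences) reported in the abstract.
    """
    return {
        level: np.clip(
            np.round(rng.normal(loc=2 + effect * level, scale=0.8, size=N_PER_LEVEL)),
            1, 5,
        )
        for level in LEVELS
    }

for feature in FEATURES:
    scores_by_level = simulate_scores(effect=0.5)  # R-W-like simulated data
    # Kruskal-Wallis: do the coded values differ across the five levels?
    h_stat, p_value = stats.kruskal(*scores_by_level.values())
    print(f"{feature:32s}  H = {h_stat:6.2f}  p = {p_value:.4f}")
```

Rerunning the sketch with a near-zero `effect` would mimic the L-W pattern reported above, where the same features did not discriminate responses across proficiency levels.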
About the journal:
Language Testing is a fully peer-reviewed international journal that publishes original research and review articles on language testing and assessment. It provides a forum for the exchange of ideas and information among people working in the fields of first and second language testing and assessment, including researchers and practitioners in EFL and ESL testing, as well as assessment in child language acquisition and language pathology. In addition, special attention is given to issues of testing theory, experimental investigations, and the follow-up of practical implications.