O. Kvasova, Lyudmyla Hnapovska, V. Kalinichenko, Luliia Budas
{"title":"培训大学教师编制地方写作评定量表的意义","authors":"O. Kvasova, Lyudmyla Hnapovska, V. Kalinichenko, Luliia Budas","doi":"10.58379/xvdf9070","DOIUrl":null,"url":null,"abstract":"Language assessment literacy is currently in search of new, modern conceptualisations in which contextual factors have a growing significance and impact (Tsagari, 2020). This article presents an initiative to promote writing assessment literacy in a culture-specific educational context. Assessment of writing belongs to the under-researched areas in Ukrainian higher education, wherein teachers have to act as raters and as rating scale developers without being properly trained in language assessment. The gaps in writing assessment literacy prompted research into the strengths and weaknesses of using a local rating scale developed by university teachers. It was conducted within an Erasmus + Staff mobility project in 2016-2019 and followed up by dissemination events held in several universities in Ukraine. The сurrent study aims to explore the impact of training in writing assessment on the processes and outcomes of university teachers’ development and use of analytic rating scales. The paper analyses how three teams of teachers from different universities coped with the task, and whether the training they underwent enabled them to design well-performing rating scales. The nine participants in the study developed three local context-specific analytic rating scales following the intuitive method of scale design, detailed in the guidelines prepared by the trainer. Given the same context (ESP) and the same CEFR level (B1 ->B2), we managed to compare the three local rating scales. The study testifies to a positive impact of the training on teachers’ literacy in writing assessment.","PeriodicalId":29650,"journal":{"name":"Studies in Language Assessment","volume":null,"pages":null},"PeriodicalIF":0.1000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Implications of training university teachers in developing local writing rating scales\",\"authors\":\"O. Kvasova, Lyudmyla Hnapovska, V. Kalinichenko, Luliia Budas\",\"doi\":\"10.58379/xvdf9070\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Language assessment literacy is currently in search of new, modern conceptualisations in which contextual factors have a growing significance and impact (Tsagari, 2020). This article presents an initiative to promote writing assessment literacy in a culture-specific educational context. Assessment of writing belongs to the under-researched areas in Ukrainian higher education, wherein teachers have to act as raters and as rating scale developers without being properly trained in language assessment. The gaps in writing assessment literacy prompted research into the strengths and weaknesses of using a local rating scale developed by university teachers. It was conducted within an Erasmus + Staff mobility project in 2016-2019 and followed up by dissemination events held in several universities in Ukraine. The сurrent study aims to explore the impact of training in writing assessment on the processes and outcomes of university teachers’ development and use of analytic rating scales. The paper analyses how three teams of teachers from different universities coped with the task, and whether the training they underwent enabled them to design well-performing rating scales. 
The nine participants in the study developed three local context-specific analytic rating scales following the intuitive method of scale design, detailed in the guidelines prepared by the trainer. Given the same context (ESP) and the same CEFR level (B1 ->B2), we managed to compare the three local rating scales. The study testifies to a positive impact of the training on teachers’ literacy in writing assessment.\",\"PeriodicalId\":29650,\"journal\":{\"name\":\"Studies in Language Assessment\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.1000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Studies in Language Assessment\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.58379/xvdf9070\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in Language Assessment","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.58379/xvdf9070","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"LINGUISTICS","Score":null,"Total":0}
Implications of training university teachers in developing local writing rating scales
Language assessment literacy is currently in search of new, modern conceptualisations in which contextual factors have growing significance and impact (Tsagari, 2020). This article presents an initiative to promote writing assessment literacy in a culture-specific educational context. Assessment of writing is among the under-researched areas in Ukrainian higher education, where teachers have to act as raters and as rating scale developers without proper training in language assessment. These gaps in writing assessment literacy prompted research into the strengths and weaknesses of using a local rating scale developed by university teachers. The research was conducted within an Erasmus+ Staff Mobility project in 2016-2019 and was followed up by dissemination events held at several universities in Ukraine. The current study aims to explore the impact of training in writing assessment on the processes and outcomes of university teachers’ development and use of analytic rating scales. The paper analyses how three teams of teachers from different universities coped with the task, and whether the training they underwent enabled them to design well-performing rating scales. The nine participants in the study developed three local, context-specific analytic rating scales following the intuitive method of scale design, detailed in the guidelines prepared by the trainer. Given the same context (ESP) and the same CEFR level (B1 to B2), we were able to compare the three local rating scales. The study attests to the positive impact of the training on teachers’ literacy in writing assessment.