How do raters understand rubrics for assessing L2 interactional engagement? A comparative study of CA- and non-CA-formulated performance descriptors

Erica Sandlund & T. Greer

Studies in Language Assessment, 2020. DOI: https://doi.org/10.58379/jciw3943
While paired student discussion tests in EFL contexts are often graded using rubrics with broad descriptors, an alternative approach constructs the rubric via extensive written descriptions of video-recorded exemplary cases at each performance level. With its long history of deeply descriptive observation of interaction, Conversation Analysis (CA) is one apt tool for constructing such exemplar-based rubrics; but to what extent are non-CA specialist teacher-raters able to interpret a CA analysis in order to assess the test? This study explores this issue by comparing two paired EFL discussion tests that use exemplar-based rubrics, one written by a CA specialist and the other by EFL test constructors not specialized in CA. The complete dataset consists of test recordings (university-level Japanese learners of English, and secondary-level Swedish learners of English) and recordings of teacher-raters’ interaction. Our analysis focuses on ways experienced language educators perceive engagement while discussing their ratings of the video-recorded test talk in relation to the exemplars and descriptive rubrics. The study highlights differences in the way teacher-raters display their understanding of the notion of engagement within the tests, and demonstrates how CA rubrics can facilitate a more emically grounded assessment.