Validating an integrated reading-into-writing scale with trained university students

IF 4.2 · CAS Tier 1 (Literature) · Q1 EDUCATION & EDUCATIONAL RESEARCH
Claudia Harsch , Valeriia Koval , Paraskevi (Voula) Kanistra , Ximena Delgado-Osorio
{"title":"通过训练有素的大学生验证 \"从阅读到写作 \"综合量表","authors":"Claudia Harsch ,&nbsp;Valeriia Koval ,&nbsp;Paraskevi (Voula) Kanistra ,&nbsp;Ximena Delgado-Osorio","doi":"10.1016/j.asw.2024.100894","DOIUrl":null,"url":null,"abstract":"<div><div>Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts, such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances in four summary and opinion tasks, which were rated by six trained student raters. They are to become writing tutors for first-year students. We utilized a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning. Using thematic analysis, we analyzed think-aloud protocols, retrospective and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threads.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100894"},"PeriodicalIF":4.2000,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000874/pdfft?md5=73c505eab3803fbf3a3edfd0612d454a&pid=1-s2.0-S1075293524000874-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Validating an integrated reading-into-writing scale with trained university students\",\"authors\":\"Claudia Harsch ,&nbsp;Valeriia Koval ,&nbsp;Paraskevi (Voula) Kanistra ,&nbsp;Ximena Delgado-Osorio\",\"doi\":\"10.1016/j.asw.2024.100894\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts, such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances in four summary and opinion tasks, which were rated by six trained student raters. They are to become writing tutors for first-year students. We utilized a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning. Using thematic analysis, we analyzed think-aloud protocols, retrospective and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. 
This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threads.</div></div>\",\"PeriodicalId\":46865,\"journal\":{\"name\":\"Assessing Writing\",\"volume\":\"62 \",\"pages\":\"Article 100894\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2024-09-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S1075293524000874/pdfft?md5=73c505eab3803fbf3a3edfd0612d454a&pid=1-s2.0-S1075293524000874-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Assessing Writing\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1075293524000874\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessing Writing","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1075293524000874","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts, such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances on four summary and opinion tasks, which were rated by six trained student raters who are to become writing tutors for first-year students. We utilized a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning. Using thematic analysis, we analyzed think-aloud protocols, retrospective interviews, and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as a valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threats.
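For readers unfamiliar with the method: the abstract does not spell out the model specification, but FACETS-style many-facet Rasch analyses are commonly written in a rating-scale parameterization such as the following (an assumption here, since the study may instead use a partial-credit variant):

\[
\ln\!\left(\frac{P_{njik}}{P_{nji(k-1)}}\right) = \theta_n - \alpha_j - \delta_i - \tau_k
\]

where \(\theta_n\) is writer \(n\)'s ability, \(\alpha_j\) is rater \(j\)'s severity, \(\delta_i\) is the difficulty of criterion (or task) \(i\), and \(\tau_k\) is the threshold between adjacent band levels \(k-1\) and \(k\). Rater severity, rater consistency (fit), and the ordering of the \(\tau_k\) thresholds are precisely the quantities the study inspects when it reports "local issues with certain criteria and band levels."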
Source journal
Assessing Writing
CiteScore: 6.00
Self-citation rate: 17.90%
Articles published: 67
Journal description: Assessing Writing is a refereed international journal providing a forum for ideas, research and practice on the assessment of written language. Assessing Writing publishes articles, book reviews, conference reports, and academic exchanges concerning writing assessments of all kinds, including traditional (direct and standardised forms of) testing of writing, alternative performance assessments (such as portfolios), workplace sampling and classroom assessment. The journal focuses on all stages of the writing assessment process, including needs evaluation, assessment creation, implementation, validation, and test development.