Do source use features impact raters' judgment of argumentation? An experimental study

IF 2.2 · Q1 (Literature) · LANGUAGE & LINGUISTICS
Ping-Lin Chuang
Journal: Language Testing · DOI: 10.1177/02655322241263629 · Published: 2024-07-31 · Citations: 0

Abstract

This experimental study explores how source use features impact raters’ judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were adapted from essays written by EPT test-takers. These responses were crafted to reflect different conditions of source use features, namely source use quantity and quality. Rater scores were analyzed using the many-facet Rasch model and mixed two-way analyses of variance (ANOVAs) to examine how they are affected by source use features and rater experience. Results show that source use features impacted the argumentation scores assigned by raters. Paragraphs with more source text ideas that are better incorporated received the highest argumentation scores, and vice versa for those with limited, poorly integrated source information. Rater experience impacted scores but did not influence rater performance meaningfully. The findings of this study connect specific source use features with raters’ evaluation of argumentation, helping to further disentangle the relationships among examinee performance, rater decision, and task features of integrated argumentative writing tests. They also provide meaningful implications for writing assessment research and practices.
Source journal
Language Testing
CiteScore: 6.70 · Self-citation rate: 9.80% · Annual article output: 35
Journal description: Language Testing is a fully peer reviewed international journal that publishes original research and review articles on language testing and assessment. It provides a forum for the exchange of ideas and information between people working in the fields of first and second language testing and assessment. This includes researchers and practitioners in EFL and ESL testing, and assessment in child language acquisition and language pathology. In addition, special attention is focused on issues of testing theory, experimental investigations, and the following up of practical implications.