Examining the Generalizability of Direct Writing Assessment Tasks. CSE Technical Report 718.

Eva Chen, D. Niemi, Jia Wang, Haiwen Wang, J. Mirocha
{"title":"Examining the Generalizability of Direct Writing Assessment Tasks. CSE Technical Report 718.","authors":"Eva Chen, D. Niemi, Jia Wang, Haiwen Wang, J. Mirocha","doi":"10.1037/e643812011-001","DOIUrl":null,"url":null,"abstract":"This study investigated the level of generalizability across a few high quality assessment tasks and the validity of measuring student writing ability using a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure student general writing ability and if student performance from one writing task could be generalized to other similar writing tasks. A total of four writing prompts were used in the study, with three tasks being literature-based and one task based on a short story. A total of 397 students participated in the study and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to evaluate and make a reliable judgment of student writing performance. Examining the Generalizability of Direct Writing Assessment Tasks Performance assessment can serve to measure important and complex learning outcomes (Resnick & Resnick, 1989), provide a more direct measurement of student ability (Frederiksen, 1984; Glaser, 1991; Guthrie, 1984), and help guide improvement in instructional practices (Baron, 1991; Bennett, 1993). Of the various types of performance assessment, direct tests of writing ability have experienced the most acceptance in state and national assessment programs (Afflebach, 1985; Applebee, Langer, Jenkins, Mullins & Foertsch, 1990; Applebee, Langer, & Mullis, 1995). Advocates of direct writing assessment point out that students need more exposure to writing in the form of instruction and more frequent examinations (Breland, 1983). However, there are problems associated with using essays to measure students’ writing abilities, like objectivity of ratings, generalizability of scores across raters and tasks (Crehan, 1997). Previous generalizability studies of direct writing assessment","PeriodicalId":19116,"journal":{"name":"National Center for Research on Evaluation, Standards, and Student Testing","volume":"24 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2007-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"National Center for Research on Evaluation, Standards, and Student Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/e643812011-001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

This study investigated the generalizability of a small set of high-quality assessment tasks and the validity of measuring student writing ability with a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure students' general writing ability and whether student performance on one writing task could be generalized to other, similar writing tasks. Four writing prompts were used in the study: three tasks were literature-based and one was based on a short story. A total of 397 students participated, and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to make a reliable judgment of student writing performance.

Examining the Generalizability of Direct Writing Assessment Tasks

Performance assessment can serve to measure important and complex learning outcomes (Resnick & Resnick, 1989), provide a more direct measurement of student ability (Frederiksen, 1984; Glaser, 1991; Guthrie, 1984), and help guide improvement in instructional practices (Baron, 1991; Bennett, 1993). Of the various types of performance assessment, direct tests of writing ability have gained the most acceptance in state and national assessment programs (Afflerbach, 1985; Applebee, Langer, Jenkins, Mullis, & Foertsch, 1990; Applebee, Langer, & Mullis, 1995). Advocates of direct writing assessment point out that students need more exposure to writing, both through instruction and through more frequent examinations (Breland, 1983). However, there are problems associated with using essays to measure students' writing abilities, such as the objectivity of ratings and the generalizability of scores across raters and tasks (Crehan, 1997). Previous generalizability studies of direct writing assessment
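The three-to-five-essay finding is the kind of result a decision (D) study in generalizability theory produces: in a persons x tasks design, task-related error is divided by the number of tasks averaged, so the generalizability coefficient rises as tasks are added. The Python sketch below illustrates that relationship with hypothetical variance components; the function name g_coefficient and the values VAR_P and VAR_PT_E are illustrative placeholders, not estimates from this report.

# Minimal D-study sketch for a persons x tasks (p x t) design, using
# hypothetical variance components (not the report's actual estimates).
def g_coefficient(var_person: float, var_residual: float, n_tasks: int) -> float:
    """Generalizability coefficient for relative decisions:
    E(rho^2) = sigma^2_p / (sigma^2_p + sigma^2_pt,e / n_t)."""
    return var_person / (var_person + var_residual / n_tasks)

if __name__ == "__main__":
    VAR_P = 0.40     # hypothetical person (universe-score) variance
    VAR_PT_E = 0.60  # hypothetical person-by-task plus residual variance
    for n_tasks in range(1, 7):
        coef = g_coefficient(VAR_P, VAR_PT_E, n_tasks)
        print(f"{n_tasks} task(s): E(rho^2) = {coef:.2f}")

With these placeholder components, the coefficient climbs from .40 with one task to about .67 with three tasks and .77 with five, which is how a D study can motivate a three-to-five-task requirement for dependable scores.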