Improving written-expression curriculum-based measurement feasibility with automated writing evaluation programs.

Michael Matta, Milena A Keller-Margulis, Sterett H Mercer
{"title":"Improving written-expression curriculum-based measurement feasibility with automated writing evaluation programs.","authors":"Michael Matta, Milena A Keller-Margulis, Sterett H Mercer","doi":"10.1037/spq0000691","DOIUrl":null,"url":null,"abstract":"<p><p>Automated writing evaluation programs have emerged as alternative, feasible approaches for scoring student writing. This study evaluated accuracy, predictive validity, diagnostic accuracy, and bias of automated scores of Written-Expression Curriculum-Based Measurement (WE-CBM). A sample of 722 students in Grades 2-5 completed 3-min WE-CBM tasks during one school year. A subset of students also completed the state-mandated writing test the same year or 1 year later. Writing samples were hand-scored for four WE-CBM metrics. A computer-based approach generated automated scores for the same four metrics. Findings indicate simpler automated metrics such as total words written and words spelled correctly, closely matched hand-calculated scores, while small differences were observed for more complex metrics including correct word sequences and correct minus incorrect word sequences. Automated scores for simpler WE-CBM metrics also predicted performance on the state test similarly to hand-calculated scores. Finally, we failed to identify evidence of bias between African American and Hispanic students associated with automated scores. Implications of using automated scores for educational decision making are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":74763,"journal":{"name":"School psychology (Washington, D.C.)","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"School psychology (Washington, D.C.)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1037/spq0000691","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Automated writing evaluation programs have emerged as feasible alternatives for scoring student writing. This study evaluated the accuracy, predictive validity, diagnostic accuracy, and bias of automated scores for Written-Expression Curriculum-Based Measurement (WE-CBM). A sample of 722 students in Grades 2-5 completed 3-min WE-CBM tasks during one school year. A subset of students also completed the state-mandated writing test the same year or 1 year later. Writing samples were hand-scored for four WE-CBM metrics, and a computer-based approach generated automated scores for the same four metrics. Findings indicate that simpler automated metrics, such as total words written and words spelled correctly, closely matched hand-calculated scores, while small differences were observed for more complex metrics, including correct word sequences and correct minus incorrect word sequences. Automated scores for the simpler WE-CBM metrics also predicted performance on the state test similarly to hand-calculated scores. Finally, we found no evidence of bias associated with automated scores for African American and Hispanic students. Implications of using automated scores for educational decision making are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)
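The abstract does not describe how the automated scoring was implemented, so the sketch below is illustrative only: it shows one minimal way the four WE-CBM metrics named above could be computed for a single writing sample. The tokenizer, the toy `DICTIONARY` word list, and the treatment of a correct word sequence as any adjacent pair of correctly spelled words are all simplifying assumptions, not the study's method; the full correct-word-sequences metric also requires the pair to be grammatically and contextually acceptable.

```python
# Minimal sketch of WE-CBM metric computation, under the assumptions stated
# above. DICTIONARY is a hypothetical stand-in for a full spelling lexicon.
import re

DICTIONARY = {"the", "dog", "ran", "fast", "to", "park", "a"}

def score_sample(text: str) -> dict:
    """Return simplified WE-CBM metrics for one writing sample."""
    words = re.findall(r"[a-zA-Z']+", text.lower())   # crude tokenization
    tww = len(words)                                  # total words written
    spelled = [w in DICTIONARY for w in words]
    wsc = sum(spelled)                                # words spelled correctly
    # Simplified correct word sequences: adjacent pairs in which both words
    # are spelled correctly. (The real metric also checks grammatical fit.)
    cws = sum(1 for a, b in zip(spelled, spelled[1:]) if a and b)
    iws = tww - 1 - cws if tww > 1 else 0             # incorrect word sequences
    return {"TWW": tww, "WSC": wsc, "CWS": cws, "CIWS": cws - iws}

print(score_sample("The dog ran fsat to the park"))
# -> {'TWW': 7, 'WSC': 6, 'CWS': 4, 'CIWS': 2}
```

Even this toy version reflects the pattern the study reports: the count-based metrics (TWW, WSC) depend only on tokenization and a spelling lookup, whereas the sequence-based metrics (CWS, CIWS) compound any disagreement between automated and hand scoring across adjacent word pairs.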
