Improving Written-Expression Curriculum-Based Measurement Feasibility With Automated Writing Evaluation Programs
Michael Matta, Milena A. Keller-Margulis, Sterett H. Mercer
School Psychology (Washington, D.C.). Published online March 20, 2025. https://doi.org/10.1037/spq0000691
Abstract
Automated writing evaluation programs have emerged as alternative, feasible approaches for scoring student writing. This study evaluated the accuracy, predictive validity, diagnostic accuracy, and bias of automated scores for Written-Expression Curriculum-Based Measurement (WE-CBM). A sample of 722 students in Grades 2-5 completed 3-min WE-CBM tasks during one school year. A subset of students also completed the state-mandated writing test the same year or 1 year later. Writing samples were hand-scored for four WE-CBM metrics, and a computer-based approach generated automated scores for the same four metrics. Findings indicate that simpler automated metrics, such as total words written and words spelled correctly, closely matched hand-calculated scores, whereas small differences were observed for more complex metrics, including correct word sequences and correct minus incorrect word sequences. Automated scores for the simpler WE-CBM metrics also predicted performance on the state test similarly to hand-calculated scores. Finally, we found no evidence of bias associated with automated scores for African American and Hispanic students. Implications of using automated scores for educational decision making are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
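To make the simpler WE-CBM metrics concrete, the sketch below shows one way total words written (TWW) and words spelled correctly (WSC) could be computed automatically. This is an illustrative assumption, not the authors' scoring program: the `DICTIONARY`, tokenizer, and function names are hypothetical, and the more complex metrics (correct word sequences, correct minus incorrect word sequences) also require judging grammatical acceptability, which this sketch does not attempt.

```python
# Illustrative sketch only (not the scoring program used in the study):
# a simplified approximation of two WE-CBM metrics using a spelling list.

import re

# Hypothetical mini-dictionary; a real system would use a full word list
# or a spell-checking library.
DICTIONARY = {"the", "dog", "ran", "fast", "and", "jumped", "over", "fence"}

def tokenize(sample: str) -> list[str]:
    """Split a writing sample into word tokens, ignoring punctuation."""
    return re.findall(r"[a-zA-Z']+", sample.lower())

def total_words_written(sample: str) -> int:
    """TWW: count of all word tokens, whether spelled correctly or not."""
    return len(tokenize(sample))

def words_spelled_correctly(sample: str) -> int:
    """WSC: count of word tokens found in the spelling dictionary."""
    return sum(1 for word in tokenize(sample) if word in DICTIONARY)

if __name__ == "__main__":
    text = "The dog ran fast and jumpd over the fence"
    print(total_words_written(text))      # 9
    print(words_spelled_correctly(text))  # 8 ("jumpd" is misspelled)
```

Under these assumptions, the simpler metrics reduce to counting operations over tokens, which is consistent with the abstract's finding that automated TWW and WSC closely matched hand-calculated scores, while sequence-based metrics, which depend on contextual correctness, are harder to automate exactly.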