Modeling Writing Traits in a Formative Essay Corpus

Paul Deane, Duanli Yan, Katherine Castellano, Y. Attali, Michelle Lamar, Mo Zhang, Ian Blood, James V. Bruno, Chen Li, Wenju Cui, Chunyi Ruan, Colleen Appel, Kofi James, Rodolfo Long, Farah Qureshi
ETS Research Report Series (Q3, Social Sciences) · DOI: 10.1002/ets2.12377 · Published 2024-01-17

Abstract

This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS's digital writing service, Criterion®. The model was also validated on several other corpora, which indicated that it provides a reasonable fit for essay data from 4th grade to college. The paper includes analyses of the test-retest reliability of each trait; of longitudinal trends by trait, both within the school year and from 4th to 12th grade; and of genre differences by trait, using prompts from the Criterion topic library aligned with the major modes of writing (exposition, argumentation, narrative, description, process, comparison and contrast, and cause and effect). It demonstrates that many of the traits are about as reliable as overall e-rater® scores, that the trait model can be used to build models somewhat more closely aligned with human scores than standard e-rater models, and that there are large, significant trait differences by genre, consistent with the genre differences in trait patterns described in the larger literature. Some of the traits showed clear trends across successive revisions: students using Criterion appear to have consistently improved grammar, usage, and spelling after receiving Criterion feedback, and to have marginally improved essay organization. Many of the traits also showed clear grade-level trends. These findings indicate that the trait model could be used to support more detailed scoring and reporting for writing assessments and learning tools.