Writing Quality Predictive Modeling: Integrating Register-Related Factors

IF 1.9 | Region 1 (Literature) | JCR Q2 (Communication)
Heqiao Wang, G. A. Troia
{"title":"写作质量预测模型:整合寄存器相关因素","authors":"Heqiao Wang, G. A. Troia","doi":"10.1177/07410883231185287","DOIUrl":null,"url":null,"abstract":"The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers—narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned automated writing evaluation models in three conditions: (1) register-related factors alone, (2) linguistic features alone, and (3) the combination of these two. A total of 1006 essays (n = 327, 342, and 337 for informative, narrative, and opinion, respectively) written by 92 fourth- and fifth-graders were examined. A series of hierarchical linear regression analyses controlling for the effects of demographics were conducted to select the most useful features to capture text quality, scored by humans, in the three registers. These features were in turn entered into automated writing evaluation predictive models with tuning of the parameters in a tenfold cross-validation procedure. The average validity coefficients (i.e., quadratic-weighed kappa, Pearson correlation r, standardized mean score difference, score deviation analysis) were computed. The results demonstrate that (1) diverse feature sets are utilized to predict quality in the three registers, and (2) the combination of register-related factors and linguistic features increases the accuracy and validity of all human and automated scoring models, especially for the registers of informative and opinion writing. The findings from this study suggest that students’ register knowledge and register-specific motivation add additional predictive information when evaluating writing quality across registers beyond that afforded by linguistic features of the paper itself, whether using human scoring or automated evaluation. These findings have practical implications for educational practitioners and scholars in that they can help strengthen consideration of register-specific writing skills and cognitive and motivational forces that are essential components of effective writing instruction and assessment.","PeriodicalId":47351,"journal":{"name":"Written Communication","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2023-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Writing Quality Predictive Modeling: Integrating Register-Related Factors\",\"authors\":\"Heqiao Wang, G. A. Troia\",\"doi\":\"10.1177/07410883231185287\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers—narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned automated writing evaluation models in three conditions: (1) register-related factors alone, (2) linguistic features alone, and (3) the combination of these two. A total of 1006 essays (n = 327, 342, and 337 for informative, narrative, and opinion, respectively) written by 92 fourth- and fifth-graders were examined. 
A series of hierarchical linear regression analyses controlling for the effects of demographics were conducted to select the most useful features to capture text quality, scored by humans, in the three registers. These features were in turn entered into automated writing evaluation predictive models with tuning of the parameters in a tenfold cross-validation procedure. The average validity coefficients (i.e., quadratic-weighed kappa, Pearson correlation r, standardized mean score difference, score deviation analysis) were computed. The results demonstrate that (1) diverse feature sets are utilized to predict quality in the three registers, and (2) the combination of register-related factors and linguistic features increases the accuracy and validity of all human and automated scoring models, especially for the registers of informative and opinion writing. The findings from this study suggest that students’ register knowledge and register-specific motivation add additional predictive information when evaluating writing quality across registers beyond that afforded by linguistic features of the paper itself, whether using human scoring or automated evaluation. These findings have practical implications for educational practitioners and scholars in that they can help strengthen consideration of register-specific writing skills and cognitive and motivational forces that are essential components of effective writing instruction and assessment.\",\"PeriodicalId\":47351,\"journal\":{\"name\":\"Written Communication\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2023-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Written Communication\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1177/07410883231185287\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Written Communication","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/07410883231185287","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMMUNICATION","Score":null,"Total":0}
引用次数: 0

Abstract

The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers—narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned automated writing evaluation models in three conditions: (1) register-related factors alone, (2) linguistic features alone, and (3) the combination of these two. A total of 1006 essays (n = 327, 342, and 337 for informative, narrative, and opinion, respectively) written by 92 fourth- and fifth-graders were examined. A series of hierarchical linear regression analyses controlling for the effects of demographics were conducted to select the most useful features to capture text quality, scored by humans, in the three registers. These features were in turn entered into automated writing evaluation predictive models with tuning of the parameters in a tenfold cross-validation procedure. The average validity coefficients (i.e., quadratic-weighted kappa, Pearson correlation r, standardized mean score difference, score deviation analysis) were computed. The results demonstrate that (1) diverse feature sets are utilized to predict quality in the three registers, and (2) the combination of register-related factors and linguistic features increases the accuracy and validity of all human and automated scoring models, especially for the registers of informative and opinion writing. The findings from this study suggest that students’ register knowledge and register-specific motivation add additional predictive information when evaluating writing quality across registers beyond that afforded by linguistic features of the paper itself, whether using human scoring or automated evaluation. These findings have practical implications for educational practitioners and scholars in that they can help strengthen consideration of register-specific writing skills and cognitive and motivational forces that are essential components of effective writing instruction and assessment.
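For readers who want to see what the evaluation pipeline described above might look like in practice, the following is a minimal sketch (not the authors' code) of a tenfold cross-validation loop that scores essays and computes the reported validity coefficients: quadratic-weighted kappa, Pearson r, and a standardized mean score difference. The Ridge regressor, the feature matrix layout, and the pooled-SD form of the standardized difference are assumptions for illustration only; the paper does not specify these details.

```python
# Sketch of a register-partitioned AWE evaluation loop, assuming scikit-learn/scipy.
# Feature matrix X (linguistic and/or register-related features) and integer human
# scores y_human are hypothetical inputs.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

def evaluate_awe_model(X, y_human, n_splits=10, seed=42):
    """Cross-validate a score predictor against human ratings and return
    fold-averaged quadratic-weighted kappa, Pearson r, and standardized
    mean score difference."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    qwk, r, smd = [], [], []
    for train_idx, test_idx in kf.split(X):
        model = Ridge(alpha=1.0)                 # stand-in scoring model
        model.fit(X[train_idx], y_human[train_idx])
        pred = model.predict(X[test_idx])
        # Round to the human score scale for the kappa computation.
        pred_int = np.clip(np.rint(pred), y_human.min(), y_human.max()).astype(int)
        qwk.append(cohen_kappa_score(y_human[test_idx], pred_int,
                                     weights="quadratic"))
        r.append(pearsonr(y_human[test_idx], pred)[0])
        # Standardized mean score difference (assumed pooled-SD form).
        pooled_sd = np.sqrt((np.var(y_human[test_idx], ddof=1)
                             + np.var(pred, ddof=1)) / 2)
        smd.append((np.mean(pred) - np.mean(y_human[test_idx])) / pooled_sd)
    return {"qwk": np.mean(qwk), "pearson_r": np.mean(r), "smd": np.mean(smd)}
```

Running such a loop separately on the narrative, informative, and opinion subsets, with feature sets built from (1) register-related factors, (2) linguistic features, or (3) both, would mirror the three comparison conditions described in the abstract.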
Source Journal: Written Communication
CiteScore: 3.90
Self-citation rate: 15.80%
Articles per year: 20
About the journal: Written Communication is an international multidisciplinary journal that publishes theory and research in writing from fields including anthropology, English, education, history, journalism, linguistics, psychology, and rhetoric. Among topics of interest are the nature of writing ability; the assessment of writing; the impact of technology on writing (and the impact of writing on technology); the social and political consequences of writing and writing instruction; nonacademic writing; literacy (including workplace and emergent literacy and the effects of classroom processes on literacy development); the social construction of knowledge; the nature of writing in disciplinary and professional domains.