What's the Point? How Scores Undermine Written Comments on Open-Ended Work

P. Crain, B. Bailey
{"title":"有什么意义?分数如何影响开放式作业的书面评论","authors":"P. Crain, B. Bailey","doi":"10.1145/3430895.3460132","DOIUrl":null,"url":null,"abstract":"Scaling assessments typically relies on quantifying work quality, yet written comments are the assessment method of choice for open-ended work. Existing scalable solutions compromise by mapping quality scores to pre-authored comments, but how scores influence interpretation of these comments is not well understood. We report results from a study of how 441 participants authored and revised short stories in response to a score, written comments, both types of feedback, or no feedback. We analyzed data from the story-writing task and two surveys to determine task and feedback satisfaction, revision depth and effort, and improvement between drafts for each participant. We found task satisfaction and task performance were positively correlated among participants who were shown a score. Feedback satisfaction, revision effort, and improvement were highest among participants shown written comments. Either type of feedback prompted more deep revisions than no feedback, but together elicited fewer deep revisions than written comments alone. Our work informs the design of scalable open-ended assessment systems by contributing insights regarding how scores influence perceptions of written feedback and subsequent revision outcomes.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"What's the Point? How Scores Undermine Written Comments on Open-Ended Work\",\"authors\":\"P. Crain, B. Bailey\",\"doi\":\"10.1145/3430895.3460132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Scaling assessments typically relies on quantifying work quality, yet written comments are the assessment method of choice for open-ended work. Existing scalable solutions compromise by mapping quality scores to pre-authored comments, but how scores influence interpretation of these comments is not well understood. We report results from a study of how 441 participants authored and revised short stories in response to a score, written comments, both types of feedback, or no feedback. We analyzed data from the story-writing task and two surveys to determine task and feedback satisfaction, revision depth and effort, and improvement between drafts for each participant. We found task satisfaction and task performance were positively correlated among participants who were shown a score. Feedback satisfaction, revision effort, and improvement were highest among participants shown written comments. Either type of feedback prompted more deep revisions than no feedback, but together elicited fewer deep revisions than written comments alone. 
Our work informs the design of scalable open-ended assessment systems by contributing insights regarding how scores influence perceptions of written feedback and subsequent revision outcomes.\",\"PeriodicalId\":125581,\"journal\":{\"name\":\"Proceedings of the Eighth ACM Conference on Learning @ Scale\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Eighth ACM Conference on Learning @ Scale\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3430895.3460132\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Eighth ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3430895.3460132","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Scaling assessments typically relies on quantifying work quality, yet written comments are the assessment method of choice for open-ended work. Existing scalable solutions compromise by mapping quality scores to pre-authored comments, but how scores influence interpretation of these comments is not well understood. We report results from a study of how 441 participants authored and revised short stories in response to a score, written comments, both types of feedback, or no feedback. We analyzed data from the story-writing task and two surveys to determine task and feedback satisfaction, revision depth and effort, and improvement between drafts for each participant. We found task satisfaction and task performance were positively correlated among participants who were shown a score. Feedback satisfaction, revision effort, and improvement were highest among participants shown written comments. Either type of feedback prompted more deep revisions than no feedback, but together elicited fewer deep revisions than written comments alone. Our work informs the design of scalable open-ended assessment systems by contributing insights regarding how scores influence perceptions of written feedback and subsequent revision outcomes.