PeerBERT: Automated Characterization of Peer Review Comments across Courses

Amber J. Dood, Blair A. Winograd, S. Finkenstaedt-Quinn, A. Gere, G. Shultz
{"title":"PeerBERT: Automated Characterization of Peer Review Comments across Courses","authors":"Amber J. Dood, Blair A. Winograd, S. Finkenstaedt-Quinn, A. Gere, G. Shultz","doi":"10.1145/3506860.3506892","DOIUrl":null,"url":null,"abstract":"Writing-to-learn pedagogies are an evidence-based practice known to aid students in constructing knowledge. Barriers exist for the implementation of such assignments; namely, instructors feel they do not have time to provide each student with feedback. To ease implementation of writing-to-learn assignments at scale, we have incorporated automated peer review, which facilitates peer review without input from the instructor. Participating in peer review can positively impact students’ learning and allow students to receive feedback on their writing. Instructors may want to monitor these peer interactions and gain insight into their students’ understanding using the feedback generated by their peers. To facilitate instructors’ use of the content from students’ peer review comments, we pre-trained a transformer model called PeerBERT. PeerBERT was fine-tuned on several downstream tasks to categorize students’ peer review comments as praise, problem/solution, or verification/summary. The model exhibits high accuracy, even across different peer review prompts, assignments, and courses. Additional downstream tasks label problem/solution peer review comments as one or more types: writing/formatting, missing content/needs elaboration, and incorrect content. This approach can help instructors pinpoint common issues in student writing by parsing out which comments are problem/solution and which type of problem/solution students identify.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"LAK22: 12th International Learning Analytics and Knowledge Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3506860.3506892","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Writing-to-learn pedagogies are an evidence-based practice known to aid students in constructing knowledge. Barriers exist for the implementation of such assignments; namely, instructors feel they do not have time to provide each student with feedback. To ease implementation of writing-to-learn assignments at scale, we have incorporated automated peer review, which facilitates peer review without input from the instructor. Participating in peer review can positively impact students’ learning and allow students to receive feedback on their writing. Instructors may want to monitor these peer interactions and gain insight into their students’ understanding using the feedback generated by their peers. To facilitate instructors’ use of the content from students’ peer review comments, we pre-trained a transformer model called PeerBERT. PeerBERT was fine-tuned on several downstream tasks to categorize students’ peer review comments as praise, problem/solution, or verification/summary. The model exhibits high accuracy, even across different peer review prompts, assignments, and courses. Additional downstream tasks label problem/solution peer review comments as one or more types: writing/formatting, missing content/needs elaboration, and incorrect content. This approach can help instructors pinpoint common issues in student writing by parsing out which comments are problem/solution and which type of problem/solution students identify.
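The paper does not include training code, so the following is a minimal, hypothetical sketch of how a three-way comment classifier of this kind could be fine-tuned with the Hugging Face Transformers library. The checkpoint name (bert-base-uncased), label ordering, example comments, and hyperparameters are illustrative assumptions; the actual PeerBERT model is additionally pre-trained on peer review text before fine-tuning, a step omitted here.

```python
# Hypothetical sketch, not the authors' implementation: fine-tune a BERT-style
# encoder to label peer review comments as praise, problem/solution, or
# verification/summary. Model name, labels, and data below are illustrative.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["praise", "problem/solution", "verification/summary"]

class CommentDataset(Dataset):
    """Wraps (comment text, label index) pairs for the DataLoader."""
    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.encodings = tokenizer(texts, truncation=True, padding="max_length",
                                   max_length=max_length, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = self.labels[idx]
        return item

# Start from a generic pre-trained checkpoint; the domain pre-training that
# produces PeerBERT itself is omitted in this sketch.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

# Toy examples standing in for human-annotated peer review comments.
train_texts = [
    "Great job explaining the mechanism in your second paragraph.",
    "The equilibrium section is missing a discussion of the key principle.",
    "You covered all of the points listed in the rubric.",
]
train_labels = [0, 1, 2]

loader = DataLoader(CommentDataset(train_texts, train_labels, tokenizer),
                    batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        outputs = model(**batch)   # cross-entropy loss computed internally
        outputs.loss.backward()
        optimizer.step()
```

The additional downstream tasks, which tag problem/solution comments as writing/formatting, missing content/needs elaboration, or incorrect content, allow a comment to carry more than one label. Under the same assumptions, that case could be handled by loading the model with problem_type="multi_label_classification" and supplying multi-hot float label vectors, which switches the loss to per-label binary cross-entropy.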