Employing Peer Review to Evaluate the Quality of Student Generated Content at Scale: A Trust Propagation Approach

A. Darvishi, Hassan Khosravi, S. Sadiq
DOI: 10.1145/3430895.3460129
Venue: Proceedings of the Eighth ACM Conference on Learning @ Scale
Publication date: 2021-06-08
Citations: 12

Abstract

Engaging students in the creation of learning resources has been demonstrated to have pedagogical benefits and lead to the creation of large repositories of learning resources which can be used to complement student learning in different ways. However, to effectively utilise a learnersourced repository of content, a selection process is needed to separate high-quality from low-quality resources as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach to evaluating the quality of learnersourced content is to use a peer review process where students are asked to assess the quality of resources authored by their peers. However, this method poses the problem of "truth inference" since the judgements of students as experts-in-training cannot wholly be trusted. This paper presents a graph-based approach to propagate the reliability and trust using data from peer and instructor evaluations in order to simultaneously infer the quality of the learnersourced content and the reliability and trustworthiness of users in a live setting. We use empirical data from a learnersourcing system called RiPPLE to evaluate our approach. Results demonstrate that the proposed approach can propagate reliability and utilise the limited availability of instructors in spot-checking to improve the accuracy of the model compared to baseline models and the current model used in the system.
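The core idea described above (jointly inferring resource quality and reviewer reliability, anchored by instructor spot checks) can be sketched as a simple alternating update. This is an illustrative assumption, not the paper's actual model: the function name `propagate_trust`, the weighted-mean quality update, and the error-based reliability update are all hypothetical stand-ins for the graph-based propagation the paper proposes.

```python
def propagate_trust(ratings, instructor_labels, n_iters=20):
    """Toy truth-inference loop.

    ratings: dict mapping (reviewer, resource) -> rating in [0, 1].
    instructor_labels: dict mapping resource -> quality in [0, 1]
        (instructor spot checks, treated as ground truth).
    Returns (quality per resource, reliability per reviewer).
    """
    reviewers = {r for r, _ in ratings}
    resources = {x for _, x in ratings}
    reliability = {r: 1.0 for r in reviewers}  # start fully trusted
    quality = {x: 0.5 for x in resources}      # uninformative prior

    for _ in range(n_iters):
        # Quality step: reliability-weighted mean of peer ratings;
        # spot-checked resources keep their instructor label.
        for x in resources:
            if x in instructor_labels:
                quality[x] = instructor_labels[x]
                continue
            num = den = 0.0
            for (r, y), v in ratings.items():
                if y == x:
                    num += reliability[r] * v
                    den += reliability[r]
            if den > 0:
                quality[x] = num / den
        # Reliability step: 1 minus a reviewer's mean absolute
        # disagreement with the currently inferred quality.
        for r in reviewers:
            errs = [abs(v - quality[x])
                    for (rr, x), v in ratings.items() if rr == r]
            if errs:
                reliability[r] = max(0.0, 1.0 - sum(errs) / len(errs))
    return quality, reliability
```

In this sketch an accurate reviewer's reliability grows while a noisy reviewer's shrinks, so a single instructor spot check can pull the inferred quality of unchecked resources toward the accurate reviewer's ratings, which is the intuition behind propagating trust from limited instructor effort.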