Authors: A. Darvishi, Hassan Khosravi, S. Sadiq
Venue: Proceedings of the Eighth ACM Conference on Learning @ Scale
Publication date: 2021-06-08
DOI: 10.1145/3430895.3460129
Employing Peer Review to Evaluate the Quality of Student Generated Content at Scale: A Trust Propagation Approach
Engaging students in the creation of learning resources has been demonstrated to have pedagogical benefits and to lead to large repositories of learning resources that can complement student learning in different ways. However, to effectively utilise a learnersourced repository of content, a selection process is needed to separate high-quality from low-quality resources, as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach to evaluating the quality of learnersourced content is a peer review process in which students are asked to assess the quality of resources authored by their peers. However, this method poses the problem of "truth inference", since the judgements of students, as experts-in-training, cannot wholly be trusted. This paper presents a graph-based approach that propagates reliability and trust using data from peer and instructor evaluations in order to simultaneously infer, in a live setting, the quality of the learnersourced content and the reliability and trustworthiness of users. We use empirical data from a learnersourcing system called RiPPLE to evaluate our approach. Results demonstrate that, compared to baseline models and the model currently used in the system, the proposed approach can propagate reliability and utilise the limited availability of instructors for spot-checking to improve the accuracy of the model.
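To make the idea concrete, the following is a minimal illustrative sketch of graph-based trust propagation in the spirit the abstract describes; it is not the paper's actual algorithm. It assumes a bipartite graph of reviewers and resources: each reviewer carries a reliability weight, each resource an inferred quality score, and instructor spot-checks pin some quality scores to ground truth. The two estimates are updated alternately until they stabilise, so instructor judgements propagate to resources and reviewers that were never spot-checked directly. All names (`propagate_trust`, `ratings`, `spot_checks`) are hypothetical.

```python
def propagate_trust(ratings, spot_checks, n_iters=50):
    """Jointly estimate resource quality and reviewer reliability.

    ratings:     dict {(reviewer, resource): peer score in [0, 1]}
    spot_checks: dict {resource: instructor score in [0, 1]}
    Returns (quality, reliability) dicts.
    """
    reviewers = {r for r, _ in ratings}
    resources = {s for _, s in ratings}
    reliability = {r: 1.0 for r in reviewers}  # start fully trusted
    quality = {s: 0.5 for s in resources}      # uninformative prior

    for _ in range(n_iters):
        # Step 1: quality = reliability-weighted average of peer ratings,
        # overridden by an instructor spot-check where one exists.
        for s in resources:
            num = den = 0.0
            for (r, s2), score in ratings.items():
                if s2 == s:
                    num += reliability[r] * score
                    den += reliability[r]
            quality[s] = spot_checks.get(s, num / den if den else 0.5)

        # Step 2: reliability = agreement of a reviewer's ratings with the
        # current quality estimates (1 minus mean absolute error).
        for r in reviewers:
            errs = [abs(score - quality[s])
                    for (r2, s), score in ratings.items() if r2 == r]
            reliability[r] = 1.0 - sum(errs) / len(errs)

    return quality, reliability
```

With a single spot-check, a reviewer who agrees with the instructor gains reliability, and that extra weight in turn shifts the inferred quality of every other resource they rated — the propagation effect the abstract refers to.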