Estimating Student Grades through Peer Assessment as a Crowdsourcing Calibration Problem

Yunkai Xiao, Yinan Gao, Chuhuai Yue, E. Gehringer
{"title":"通过同侪评估估计学生成绩作为众包校准问题","authors":"Yunkai Xiao, Yinan Gao, Chuhuai Yue, E. Gehringer","doi":"10.1109/ITHET56107.2022.10031993","DOIUrl":null,"url":null,"abstract":"There is a trend to move education into an online environment, especially when offline learning is restricted by time, space, availability, or is impacted by issues such as a public health incident. Evaluating students’ performance in online education has always been challenging. Objective questions, which can be graded automatically, could only assess certain aspects of students’ mastery of knowledge. A grading problem appears if subjective questions exist, primarily when the class is taught at scale. Many online education platforms have been using peer assessment to resolve this problem. Aside from that, peer assessment also improves interactions between students, instructors, and peers. While peer assessment has some inherent weaknesses, reviewers may not have the same ability or attitude toward reviewing others, and the feedback generated by them shall not be taken at face value. Many algorithms have been developed to evaluate annotators’ trustworthiness and generate reliable labels in the crowdsourcing industry. We proposed an algorithm under the same concept that could provide accurate automated grading, an overview of students’ weaknesses from peer feedback, and identify reviewers who lack an understanding of certain concepts. This information allows instructors to offer targeted training and create data-driven lesson plans.","PeriodicalId":125795,"journal":{"name":"2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Estimating Student Grades through Peer Assessment as a Crowdsourcing Calibration Problem\",\"authors\":\"Yunkai Xiao, Yinan Gao, Chuhuai Yue, E. Gehringer\",\"doi\":\"10.1109/ITHET56107.2022.10031993\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is a trend to move education into an online environment, especially when offline learning is restricted by time, space, availability, or is impacted by issues such as a public health incident. Evaluating students’ performance in online education has always been challenging. Objective questions, which can be graded automatically, could only assess certain aspects of students’ mastery of knowledge. A grading problem appears if subjective questions exist, primarily when the class is taught at scale. Many online education platforms have been using peer assessment to resolve this problem. Aside from that, peer assessment also improves interactions between students, instructors, and peers. While peer assessment has some inherent weaknesses, reviewers may not have the same ability or attitude toward reviewing others, and the feedback generated by them shall not be taken at face value. Many algorithms have been developed to evaluate annotators’ trustworthiness and generate reliable labels in the crowdsourcing industry. We proposed an algorithm under the same concept that could provide accurate automated grading, an overview of students’ weaknesses from peer feedback, and identify reviewers who lack an understanding of certain concepts. 
This information allows instructors to offer targeted training and create data-driven lesson plans.\",\"PeriodicalId\":125795,\"journal\":{\"name\":\"2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET)\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITHET56107.2022.10031993\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITHET56107.2022.10031993","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

There is a trend toward moving education into online environments, especially when offline learning is restricted by time, space, or availability, or is disrupted by events such as a public health incident. Evaluating students' performance in online education has always been challenging. Objective questions, which can be graded automatically, assess only certain aspects of students' mastery of the material. A grading problem arises when subjective questions are used, particularly when a class is taught at scale. Many online education platforms use peer assessment to address this problem; peer assessment also improves interaction among students, instructors, and peers. However, peer assessment has inherent weaknesses: reviewers differ in their ability and attitude toward reviewing others, and their feedback should not be taken at face value. In the crowdsourcing industry, many algorithms have been developed to evaluate annotators' trustworthiness and generate reliable labels. We propose an algorithm built on the same concept that provides accurate automated grading, an overview of students' weaknesses drawn from peer feedback, and identification of reviewers who lack an understanding of certain concepts. This information allows instructors to offer targeted training and create data-driven lesson plans.
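To make the crowdsourcing-calibration framing concrete, the sketch below illustrates one common approach from that literature applied to peer grading: alternately estimating a consensus grade per submission and a reliability weight per reviewer, so that reviewers who deviate most from the consensus are down-weighted. This is a minimal illustration under assumed data structures; the function calibrate_peer_grades and its inputs are hypothetical, and the paper's actual algorithm is not reproduced here.

```python
# Illustrative sketch only: a simple iterative-reweighting scheme for estimating
# submission grades from peer-assigned scores, in the spirit of crowdsourcing
# calibration for continuous ratings. NOT the algorithm from the paper;
# names and parameters are hypothetical.

from collections import defaultdict

def calibrate_peer_grades(reviews, n_iters=20, eps=1e-6):
    """reviews: list of (reviewer_id, submission_id, score) tuples.
    Returns (estimated_grades, reviewer_weights)."""
    # Start with every reviewer equally trusted.
    weights = defaultdict(lambda: 1.0)
    grades = {}

    for _ in range(n_iters):
        # Consensus pass: grade = reliability-weighted mean of peer scores.
        num, den = defaultdict(float), defaultdict(float)
        for rid, sid, score in reviews:
            num[sid] += weights[rid] * score
            den[sid] += weights[rid]
        grades = {sid: num[sid] / max(den[sid], eps) for sid in num}

        # Reliability pass: weight = inverse of mean squared deviation from
        # the current consensus (larger error -> lower weight).
        err, cnt = defaultdict(float), defaultdict(int)
        for rid, sid, score in reviews:
            err[rid] += (score - grades[sid]) ** 2
            cnt[rid] += 1
        weights = defaultdict(lambda: 1.0,
                              {rid: 1.0 / (err[rid] / cnt[rid] + eps) for rid in err})

    return grades, dict(weights)

# Example: reviewer "r3" systematically disagrees and ends up down-weighted.
reviews = [("r1", "s1", 90), ("r2", "s1", 88), ("r3", "s1", 60),
           ("r1", "s2", 75), ("r2", "s2", 78), ("r3", "s2", 95)]
grades, weights = calibrate_peer_grades(reviews)
print(grades)   # consensus grade estimates per submission
print(weights)  # estimated reliability per reviewer
```

In a scheme like this, the reviewer weights themselves carry pedagogical information: a reviewer whose scores consistently diverge from the consensus is a candidate for the targeted training described in the abstract.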