Peer Assessment in Massive Open Online Courses: Monitoring the Knowledge Assessment Effectiveness

Tatiana I. Popova, D. Kolesova
{"title":"Peer Assessment in Massive Open Online Courses: Monitoring the Knowledge Assessment Effectiveness","authors":"Tatiana I. Popova, D. Kolesova","doi":"10.1145/3373722.3373794","DOIUrl":null,"url":null,"abstract":"Massive Open Online Courses (MOOCs) is an advanced form of education in the modern world. With all the advantages this format has, it is difficult to leave unchanged all the procedures included into the traditional teach-learn-assess training cycle; this difficulty can be explained by the specific form of teaching. Peer assessment is usually used to overcome these objective obstacles, although unmoderated peer assessment is questioned by experts and is not trusted much by students (and thus, accordingly, entails a decreasing motivation as students withhold from assessing performance of their peers and withdraw from learning platforms). The works dedicated to this issue mainly deal with multiple ways to resolve it within the scope of the programmers' approach or the approach of the training platform moderators. Our study deals with the question of whether correctly formulated assignments and a methodologically justified sequence of assignments help to overcome these difficulties. The study was performed with the use of questionnaires for students of various academic years (2017-2019) taking part in one MOOC, and with quantitative and qualitative analysis of the performed assignments for peer assessment for 2017 (within the scope of that session, 947 answers were given to the self-assessment assignments, and 727 answers were given to the assignments for peer assessment). The result of the study was a supported recommendation to authors of the MOOC profile in humanities, wherein the essay was supposed to be the student's expected answer: 1) there should be a clear correlation between the conditions of the assignment setting and the criteria used for assessment of the assignment for peer assessment; 2) self-assessment assignments built on the basis of the same principle as the peer assessment assignments should be included in the course.","PeriodicalId":243162,"journal":{"name":"Proceedings of the XI International Scientific Conference Communicative Strategies of the Information Society","volume":"171 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the XI International Scientific Conference Communicative Strategies of the Information Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3373722.3373794","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Massive Open Online Courses (MOOCs) are an advanced form of education in the modern world. For all the advantages of this format, it is difficult to keep unchanged every procedure of the traditional teach-learn-assess cycle, a difficulty that stems from the specific form of teaching. Peer assessment is usually used to overcome these objective obstacles, although unmoderated peer assessment is questioned by experts and little trusted by students, which in turn lowers motivation: students refrain from assessing their peers' work and withdraw from learning platforms. Work devoted to this issue mainly explores ways to resolve it within the scope of the programmers' approach or that of the training-platform moderators. Our study asks whether correctly formulated assignments and a methodologically justified sequence of assignments help to overcome these difficulties. The study used questionnaires given to students of several academic years (2017-2019) who took part in one MOOC, together with quantitative and qualitative analysis of the peer-assessment assignments completed in 2017 (within that session, 947 answers were given to the self-assessment assignments and 727 answers to the peer-assessment assignments). The result of the study is a supported recommendation to authors of humanities MOOCs in which an essay is the expected student answer: 1) there should be a clear correlation between the conditions set in the assignment and the criteria used to assess it in peer assessment; 2) self-assessment assignments built on the same principle as the peer-assessment assignments should be included in the course.
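As a purely illustrative aside (the paper does not publish its analysis code), the gap between the two response counts reported for the 2017 session can be expressed as a simple rate. The function name and the interpretation of the gap as a "drop-off" are assumptions made here for illustration; only the counts 947 and 727 come from the abstract.

```python
# Minimal sketch, not the authors' method: tabulate the gap between
# self-assessment and peer-assessment responses in the 2017 session.

def drop_off_rate(self_assessment_answers: int, peer_assessment_answers: int) -> float:
    """Share of self-assessment answers not followed by a peer-assessment answer (illustrative)."""
    if self_assessment_answers <= 0:
        raise ValueError("self_assessment_answers must be positive")
    return 1 - peer_assessment_answers / self_assessment_answers


if __name__ == "__main__":
    # Counts reported in the abstract for the 2017 session.
    rate = drop_off_rate(947, 727)
    print(f"Gap between self- and peer-assessment responses: {rate:.1%}")  # ~23.2%
```

Note that the abstract reports answer counts, not unique students, so this ratio is only a rough proxy for the motivation drop the authors describe.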