The impact of the peer review process evolution on learner performance in e-learning environments
M. Montebello, Petrilson Pinheiro, B. Cope, M. Kalantzis, Tabassum Amina, Duane Searsmith, D. Cao
Proceedings of the Fifth Annual ACM Conference on Learning at Scale, 2018-06-26
DOI: 10.1145/3231644.3231693
Citations: 6
Abstract
Student performance over the course of an academic program can be significantly and positively influenced by a series of feedback processes involving peers and tutors. Ideally, this feedback is structured and incremental, and as a consequence the resulting data accumulates at large scale even in relatively small classes. In this paper, we investigate the effect of such processes by analyzing assessment data collected from online courses. We plan to fully analyze the massive dataset of over three and a half million granular data points generated, to make the case for the scalability of these kinds of learning analytics. This could shed crucial light on assessment mechanisms in MOOCs, as we continue to refine our processes in an effort to balance formative with summative assessment.