{"title":"Evaluating the Relationship Between Course Structure, Learner Activity, and Perceived Value of Online Courses","authors":"Ido Roll, Leah P. Macfadyen, Debra Sandilands","doi":"10.1145/2724660.2728699","DOIUrl":"https://doi.org/10.1145/2724660.2728699","url":null,"abstract":"Using aggregated Learning Management System data and course evaluation data from 26 online courses, we evaluated the relationship between measures of online activity, course and assessment structure, and student perceptions of course value. We find relationships between selected dimensions of learner engagement that reflect current constructivist theories of learning. This work demonstrates the potential value of pooled, easily accessible, and anonymous data for high-level inferences regarding design of online courses and the learner experience.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86150214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Learners' General Perception Towards Learning with MOOC Classmates: An Exploratory Study","authors":"Soon Hau Chua, Juho Kim, T. K. Monserrat, Shengdong Zhao","doi":"10.1145/2724660.2728680","DOIUrl":"https://doi.org/10.1145/2724660.2728680","url":null,"abstract":"In this work-in-progress, we present our preliminary findings from an exploratory study on understanding learners' general behavior and perception towards learning with classmates in MOOCs. One-on-one semi-structured interview designed with grounded theory method was conducted with seven MOOC learners. Initial analysis of the interview data revealed several interesting insights on learners' behavior in working with other learners in MOOCs. We intend to expand the findings in future work to derive design implications for incorporating collaborative features into MOOCs.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88466940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BayesRank: A Bayesian Approach to Ranked Peer Grading","authors":"Andrew E. Waters, David Tinapple, Richard Baraniuk","doi":"10.1145/2724660.2724672","DOIUrl":"https://doi.org/10.1145/2724660.2724672","url":null,"abstract":"Advances in online and computer supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly resolve a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85998028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peers in MOOCs: Lessons Based on the Education Production Function, Collective Action, and an Experiment","authors":"B. Williams","doi":"10.1145/2724660.2728677","DOIUrl":"https://doi.org/10.1145/2724660.2728677","url":null,"abstract":"Economic theory about peers can help learning scientists and designers scale their work from the scale of small classrooms to limitless learning experiences. I propose: 1. We may increase productivity in online learning by changing technologies around peers; many structures around peers can scale with class size. 2. It is not always in students' best interests to be good peers, and collective action failures may worsen with class size. I conducted an experiment in a NovoEd MOOC for teachers that was motivated by these propositions; it leads to future questions about unintended and emergent effects.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89756011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting Face-to-Face Like Communication Modalities for Asynchronous Assignment Feedback in Math Education","authors":"Bernie Randles, Dongwook Yoon, Amy Cheatle, Malte F. Jung, François Guimbretière","doi":"10.1145/2724660.2728684","DOIUrl":"https://doi.org/10.1145/2724660.2728684","url":null,"abstract":"The digitization of educational course content has proved to be problematic for math instructors due to the lack of quality feedback tools that can accommodate the commenter to efficiently express math formulae and convey descriptions about complex ideas contextualized in situ. This paper proposes that RichReview, a document annotation system which creates inking, voice and deictic gestures on top of the student's submitted work, is a possible formative math feedback solution, because it enables face-to-face like commentary within the contexts of the document at hand. A preliminary qualitative evaluation study conducted while having students receive RichReview feedback showed promise to our approach to enhance the quality of feedback, with the implication that incorporating multi-modal feedback into workflows can be an effective method to address elements of feedback submissions lacking in coursework that has moved online.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87307588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Problems Before Solutions: Automated Problem Clarification at Scale","authors":"S. Basu, A. Wu, Brian Hou, John DeNero","doi":"10.1145/2724660.2724679","DOIUrl":"https://doi.org/10.1145/2724660.2724679","url":null,"abstract":"Automatic assessment reduces the need for individual feedback in massive courses, but often focuses only on scoring solutions, rather than assessing whether students correctly understand problems. We present an enriched approach to automatic assessment that explicitly assists students in understanding the detailed specification of technical problems that they are asked to solve, in addition to evaluating their solutions. Students are given a suite of solution test cases, but they must first unlock each test case by validating its behavior before they are allowed to apply it to their proposed solution. When provided with this automated feedback early in the problem-solving process, students ask fewer clarificatory questions and express less confusion about assessments. As a result, instructors spend less time explaining problems to students. In a 1300-person university course, we observed that the vast majority of students chose to validate their understanding of test cases before attempting to solve problems. These students reported that the validation process improved their understanding.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87039920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Ordinal Peer Grading","authors":"Karthik Raman, T. Joachims","doi":"10.1145/2724660.2724678","DOIUrl":"https://doi.org/10.1145/2724660.2724678","url":null,"abstract":"Massive Online Open Courses have become an accessible and affordable choice for education. This has led to new technical challenges for instructors such as student evaluation at scale. Recent work has found ordinal peer grading}, where individual grader orderings are aggregated into an overall ordering of assignments, to be a viable alternate to traditional instructor/staff evaluation [23]. Existing techniques, which extend rank-aggregation methods, produce a single ordering as output. While these rankings have been found to be an accurate reflection of assignment quality on average, they do not communicate any of the uncertainty inherent in the assessment process. In particular, they do not to provide instructors with an estimate of the uncertainty of each assignment's position in the ranking. In this work, we tackle this problem by applying Bayesian techniques to the ordinal peer grading problem, using MCMC-based sampling techniques in conjunction with the Mallows model. Experiments are performed on real-world peer grading datasets, which demonstrate that the proposed method provides accurate uncertainty information via the estimated posterior distributions.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87629099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing Common Analytic Challenges to Randomized Experiments in MOOCs: Attrition and Zero-Inflation","authors":"Anne Lamb, Jascha Smilack, Andrew D. Ho, J. Reich","doi":"10.1145/2724660.2724669","DOIUrl":"https://doi.org/10.1145/2724660.2724669","url":null,"abstract":"Massive open online course (MOOC) platforms increasingly allow easily implemented randomized experiments. The heterogeneity of MOOC students, however, leads to two methodological obstacles in analyzing interventions to increase engagement. (1) Many MOOC participation metrics have distributions with substantial positive skew from highly active users as well as zero-inflation from high attrition. (2) High attrition means that in some experimental designs, most users assigned to the treatment never receive it; analyses that do not consider attrition result in \"intent-to-treat\" (ITT) estimates that underestimate the true effects of interventions. We address these challenges in analyzing an intervention to improve forum participation in the 2014 JusticeX course offered on the edX MOOC platform. We compare the results of four ITT models (OLS, logistic, quantile, and zero-inflated negative binomial regressions) and three \"treatment-on-treated\" (TOT) models (Wald estimator, 2SLS with a second stage logistic model, and instrumental variables quantile regression). A combination of logistic, quantile, and zero-inflated negative binomial regressions provide the most comprehensive description of the ITT effects. TOT methods then adjust the ITT underestimates. Substantively, we demonstrate that self-assessment questions about forum participation encourage more students to engage in forums and increases the participation of already active students.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81169041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Effect of Confusion in Discussion Forums of Massive Open Online Courses","authors":"Diyi Yang, Miaomiao Wen, I. Howley, R. Kraut, C. Rosé","doi":"10.1145/2724660.2724677","DOIUrl":"https://doi.org/10.1145/2724660.2724677","url":null,"abstract":"Thousands of students enroll in Massive Open Online Courses~(MOOCs) to seek opportunities for learning and self-improvement. However, the learning process often involves struggles with confusion, which may have an adverse effect on the course participation experience, leading to dropout along the way. In this paper, we quantify that effect. We describe a classification model using discussion forum behavior and clickstream data to automatically identify posts that express confusion. We then apply survival analysis to quantify the impact of confusion on student dropout. The results demonstrate that the more confusion students express or are exposed to, the lower the probability of their retention. Receiving support and resolution of confusion helps mitigate this effect. We explore the differential effects of confusion expressed in different contexts and related to different aspects of courses. We conclude with implications for design of interventions towards improving the retention of students in MOOCs.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82947867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TELLab: An Experiential Learning Tool for Psychology","authors":"Na Li, Krzysztof Z Gajos, K. Nakayama, Ryan D. Enos","doi":"10.1145/2724660.2728678","DOIUrl":"https://doi.org/10.1145/2724660.2728678","url":null,"abstract":"In this paper, we discuss current practices and challenges of teaching psychology experiments. We review experiential learning and analogical learning pedagogies, which have informed the design of TELLab, an online platform for supporting effective experiential learning of psychology concepts.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76892346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}