{"title":"Evaluating the Relationship Between Course Structure, Learner Activity, and Perceived Value of Online Courses","authors":"Ido Roll, Leah P. Macfadyen, Debra Sandilands","doi":"10.1145/2724660.2728699","DOIUrl":"https://doi.org/10.1145/2724660.2728699","url":null,"abstract":"Using aggregated Learning Management System data and course evaluation data from 26 online courses, we evaluated the relationship between measures of online activity, course and assessment structure, and student perceptions of course value. We find relationships between selected dimensions of learner engagement that reflect current constructivist theories of learning. This work demonstrates the potential value of pooled, easily accessible, and anonymous data for high-level inferences regarding design of online courses and the learner experience.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86150214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Learners' General Perception Towards Learning with MOOC Classmates: An Exploratory Study","authors":"Soon Hau Chua, Juho Kim, T. K. Monserrat, Shengdong Zhao","doi":"10.1145/2724660.2728680","DOIUrl":"https://doi.org/10.1145/2724660.2728680","url":null,"abstract":"In this work-in-progress, we present our preliminary findings from an exploratory study on understanding learners' general behavior and perception towards learning with classmates in MOOCs. One-on-one semi-structured interviews, designed using the grounded theory method, were conducted with seven MOOC learners. Initial analysis of the interview data revealed several interesting insights into learners' behavior when working with other learners in MOOCs. We intend to expand the findings in future work to derive design implications for incorporating collaborative features into MOOCs.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88466940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"All It Takes Is One: Evidence for a Strategy for Seeding Large Scale Peer Learning Interactions","authors":"Marti A. Hearst, A. Fox, Derrick Coetzee, Bjoern Hartmann","doi":"10.1145/2724660.2728698","DOIUrl":"https://doi.org/10.1145/2724660.2728698","url":null,"abstract":"The results of a study of online peer learning suggest that it may be advantageous to automatically assign students to small peer learning groups based on how many students initially get answers to questions correct.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73094551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TELLab: An Experiential Learning Tool for Psychology","authors":"Na Li, Krzysztof Z Gajos, K. Nakayama, Ryan D. Enos","doi":"10.1145/2724660.2728678","DOIUrl":"https://doi.org/10.1145/2724660.2728678","url":null,"abstract":"In this paper, we discuss current practices and challenges of teaching psychology experiments. We review experiential learning and analogical learning pedagogies, which have informed the design of TELLab, an online platform for supporting effective experiential learning of psychology concepts.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76892346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educational Evaluation in the PKU SPOC Course \"Data Structures and Algorithms\"","authors":"Ming Zhang, Jile Zhu, Yanzhen Zou, Hongfei Yan, Dan Hao, Chuxiong Liu","doi":"10.1145/2724660.2728666","DOIUrl":"https://doi.org/10.1145/2724660.2728666","url":null,"abstract":"In order to learn the impact of MOOCs, we conducted a SPOC experiment with the course Data Structures and Algorithms at Peking University. In this paper, we analyze student online activities, test scores, and two surveys using statistical methods (t-test, analysis of variance, correlation analysis, and OLS regression) to understand which factors foster improvements in student learning. We find that the \"SPOC + Flipped\" format is a helpful mode for teaching algorithms, that time spent on the course and students' confidence had a positive impact on learning outcomes, and that SPOC resources should be used to their full potential.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76312879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
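The OLS-regression step the abstract mentions can be illustrated with a minimal sketch. The data below are entirely made up (hours spent and a 1-5 confidence rating predicting a test score); only the technique, least-squares regression with an intercept, matches what the abstract names.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data in the spirit of the study (not the real dataset):
# study hours and a 1-5 confidence rating predicting a final test score.
n = 200
hours = rng.uniform(0, 20, n)
confidence = rng.integers(1, 6, n).astype(float)
score = 50 + 1.5 * hours + 3.0 * confidence + rng.normal(0, 5, n)

# OLS regression: solve min ||X beta - y||^2 with an intercept column.
X = np.column_stack([np.ones(n), hours, confidence])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(beta)  # coefficients near the true [50, 1.5, 3.0]
```

The fitted coefficients recover the simulated effects of time spent and confidence, which is the kind of relationship the paper reports.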
{"title":"Exploring the Effect of Confusion in Discussion Forums of Massive Open Online Courses","authors":"Diyi Yang, Miaomiao Wen, I. Howley, R. Kraut, C. Rosé","doi":"10.1145/2724660.2724677","DOIUrl":"https://doi.org/10.1145/2724660.2724677","url":null,"abstract":"Thousands of students enroll in Massive Open Online Courses (MOOCs) to seek opportunities for learning and self-improvement. However, the learning process often involves struggles with confusion, which may have an adverse effect on the course participation experience, leading to dropout along the way. In this paper, we quantify that effect. We describe a classification model using discussion forum behavior and clickstream data to automatically identify posts that express confusion. We then apply survival analysis to quantify the impact of confusion on student dropout. The results demonstrate that the more confusion students express or are exposed to, the lower the probability of their retention. Receiving support and resolution of confusion helps mitigate this effect. We explore the differential effects of confusion expressed in different contexts and related to different aspects of courses. We conclude with implications for design of interventions towards improving the retention of students in MOOCs.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"135 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82947867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
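The survival-analysis step the abstract describes can be sketched with a Kaplan-Meier estimator, a standard tool for dropout curves (the paper's exact survival model is not specified in the abstract, so this is only illustrative). Here "duration" would be something like a student's last active week, and "observed" marks actual dropouts versus students still active at the end of the course.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve: S(t) = prod over event times of (1 - d_t/n_t),
    where d_t is the number of dropouts at time t and n_t the number still at risk."""
    durations = np.asarray(durations)
    observed = np.asarray(observed)
    at_risk = len(durations)
    times, surv, s = [], [], 1.0
    for t in np.unique(durations):
        mask = durations == t
        dropouts = observed[mask].sum()
        s *= 1 - dropouts / at_risk       # survival drops at each event time
        at_risk -= mask.sum()             # remove dropouts and censored students
        times.append(t)
        surv.append(s)
    return np.array(times), np.array(surv)

# Toy cohort: four students, all observed to drop out in weeks 1-4.
t, s = kaplan_meier([1, 2, 3, 4], [1, 1, 1, 1])
```

Comparing such curves for students exposed versus not exposed to confusion is the kind of analysis that would show the lower retention the paper reports.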
{"title":"Addressing Common Analytic Challenges to Randomized Experiments in MOOCs: Attrition and Zero-Inflation","authors":"Anne Lamb, Jascha Smilack, Andrew D. Ho, J. Reich","doi":"10.1145/2724660.2724669","DOIUrl":"https://doi.org/10.1145/2724660.2724669","url":null,"abstract":"Massive open online course (MOOC) platforms increasingly allow easily implemented randomized experiments. The heterogeneity of MOOC students, however, leads to two methodological obstacles in analyzing interventions to increase engagement. (1) Many MOOC participation metrics have distributions with substantial positive skew from highly active users as well as zero-inflation from high attrition. (2) High attrition means that in some experimental designs, most users assigned to the treatment never receive it; analyses that do not consider attrition result in \"intent-to-treat\" (ITT) estimates that underestimate the true effects of interventions. We address these challenges in analyzing an intervention to improve forum participation in the 2014 JusticeX course offered on the edX MOOC platform. We compare the results of four ITT models (OLS, logistic, quantile, and zero-inflated negative binomial regressions) and three \"treatment-on-treated\" (TOT) models (Wald estimator, 2SLS with a second stage logistic model, and instrumental variables quantile regression). A combination of logistic, quantile, and zero-inflated negative binomial regressions provides the most comprehensive description of the ITT effects. TOT methods then adjust the ITT underestimates. Substantively, we demonstrate that self-assessment questions about forum participation encourage more students to engage in forums and increase the participation of already active students.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81169041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
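The ITT-versus-TOT distinction in the abstract above is easy to see in simulation. The sketch below uses entirely synthetic data, not the JusticeX dataset: most users assigned to treatment never receive it (attrition) and most never post at all (zero-inflation), so the naive ITT difference in means understates the effect, and the Wald estimator rescales it by the compliance rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MOOC experiment (hypothetical numbers, not from the paper).
n = 10_000
assigned = rng.integers(0, 2, n)               # Z: randomized assignment
received = assigned * (rng.random(n) < 0.3)    # D: only ~30% of assigned users ever see it
active = rng.random(n) < 0.2                   # zero-inflation: most users never post
posts = np.where(active, rng.poisson(2 + 1.0 * received), 0)

# Intent-to-treat (ITT): difference in mean outcome by *assignment*.
itt = posts[assigned == 1].mean() - posts[assigned == 0].mean()

# Wald estimator: scale ITT by the compliance rate to estimate the
# treatment-on-treated (TOT) effect under one-sided noncompliance.
compliance = received[assigned == 1].mean() - received[assigned == 0].mean()
tot = itt / compliance

print(f"ITT = {itt:.3f}, TOT = {tot:.3f}")  # TOT exceeds the diluted ITT estimate
```

The paper's 2SLS and instrumental-variables quantile models refine this same idea for non-normal, zero-inflated outcomes.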
{"title":"Problems Before Solutions: Automated Problem Clarification at Scale","authors":"S. Basu, A. Wu, Brian Hou, John DeNero","doi":"10.1145/2724660.2724679","DOIUrl":"https://doi.org/10.1145/2724660.2724679","url":null,"abstract":"Automatic assessment reduces the need for individual feedback in massive courses, but often focuses only on scoring solutions, rather than assessing whether students correctly understand problems. We present an enriched approach to automatic assessment that explicitly assists students in understanding the detailed specification of technical problems that they are asked to solve, in addition to evaluating their solutions. Students are given a suite of solution test cases, but they must first unlock each test case by validating its behavior before they are allowed to apply it to their proposed solution. When provided with this automated feedback early in the problem-solving process, students ask fewer clarificatory questions and express less confusion about assessments. As a result, instructors spend less time explaining problems to students. In a 1300-person university course, we observed that the vast majority of students chose to validate their understanding of test cases before attempting to solve problems. These students reported that the validation process improved their understanding.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87039920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
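The unlock-then-apply workflow described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual autograder: the function names and console interaction are invented for the example.

```python
def unlock_tests(cases, ask):
    """Each test case stays locked until the student correctly predicts its
    expected output. `cases` maps an input to its expected output; `ask(inp)`
    returns the student's guess for that input."""
    unlocked = {}
    for inp, expected in cases.items():
        while ask(inp) != expected:
            print(f"Not quite -- re-read the problem specification for input {inp!r}.")
        unlocked[inp] = expected      # the student has validated this case's behavior
    return unlocked

def grade(solution, unlocked):
    """Only unlocked cases may be applied to the proposed solution."""
    return all(solution(inp) == out for inp, out in unlocked.items())
```

For example, a student unlocking the squaring cases `{2: 4, 3: 9}` must first answer what the function should return for 2 and 3; only then is their own `solution` run against those cases.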
{"title":"Bayesian Ordinal Peer Grading","authors":"Karthik Raman, T. Joachims","doi":"10.1145/2724660.2724678","DOIUrl":"https://doi.org/10.1145/2724660.2724678","url":null,"abstract":"Massive Open Online Courses have become an accessible and affordable choice for education. This has led to new technical challenges for instructors such as student evaluation at scale. Recent work has found ordinal peer grading, where individual grader orderings are aggregated into an overall ordering of assignments, to be a viable alternative to traditional instructor/staff evaluation [23]. Existing techniques, which extend rank-aggregation methods, produce a single ordering as output. While these rankings have been found to be an accurate reflection of assignment quality on average, they do not communicate any of the uncertainty inherent in the assessment process. In particular, they do not provide instructors with an estimate of the uncertainty of each assignment's position in the ranking. In this work, we tackle this problem by applying Bayesian techniques to the ordinal peer grading problem, using MCMC-based sampling techniques in conjunction with the Mallows model. Experiments are performed on real-world peer grading datasets, which demonstrate that the proposed method provides accurate uncertainty information via the estimated posterior distributions.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"118 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87629099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
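The core idea above, MCMC sampling over central orderings under a Mallows model, can be sketched in a few lines. This is a toy Metropolis sampler with adjacent-transposition proposals and made-up grader data; the paper's actual model and inference are more sophisticated.

```python
import itertools
import math
import random

def kendall_tau(a, b):
    """Number of discordant pairs between two orderings of the same items."""
    pos = {item: i for i, item in enumerate(b)}
    return sum(1 for x, y in itertools.combinations(a, 2) if pos[x] > pos[y])

def mallows_posterior_sample(graders, items, theta=1.0, steps=20_000, seed=0):
    """Metropolis sampler for the central ordering sigma of a Mallows model,
    P(pi | sigma) proportional to exp(-theta * d_kendall(pi, sigma)),
    given the graders' observed orderings. Returns thinned posterior samples."""
    rng = random.Random(seed)
    sigma = list(items)
    log_lik = lambda s: -theta * sum(kendall_tau(g, s) for g in graders)
    cur, samples = log_lik(sigma), []
    for t in range(steps):
        i = rng.randrange(len(sigma) - 1)
        prop = sigma[:]
        prop[i], prop[i + 1] = prop[i + 1], prop[i]   # adjacent transposition
        new = log_lik(prop)
        if new >= cur or rng.random() < math.exp(new - cur):
            sigma, cur = prop, new
        if t > steps // 2 and t % 50 == 0:            # discard burn-in, then thin
            samples.append(tuple(sigma))
    return samples

# Demo: three graders who happen to agree; the posterior concentrates on
# their shared ordering, and sample spread conveys positional uncertainty.
samples = mallows_posterior_sample([[0, 1, 2, 3]] * 3, [0, 1, 2, 3],
                                   theta=2.0, steps=4000)
```

The empirical distribution of each assignment's position across `samples` is exactly the per-assignment uncertainty estimate the abstract says single-ordering methods fail to provide.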
{"title":"BayesRank: A Bayesian Approach to Ranked Peer Grading","authors":"Andrew E. Waters, David Tinapple, Richard Baraniuk","doi":"10.1145/2724660.2724672","DOIUrl":"https://doi.org/10.1145/2724660.2724672","url":null,"abstract":"Advances in online and computer-supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking, in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly form a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.","PeriodicalId":20664,"journal":{"name":"Proceedings of the Second (2015) ACM Conference on Learning @ Scale","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85998028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}