{"title":"Practical Learning Research at Scale","authors":"K. Koedinger","doi":"10.1145/2876034.2876054","DOIUrl":"https://doi.org/10.1145/2876034.2876054","url":null,"abstract":"Massive scale education has emerged through online tools such as Wikipedia, Khan Academy, and MOOCs. The number of students being reached is high, but what about the quality of the educational experience? As we scale learning, we need to scale research to address this question. Such learning research should not just determine whether high quality has been achieved, but it should provide a process for how to reliably produce high quality learning. Scaling practical learning research is as much an opportunity as a problem. The opportunity comes from the fact that online courses are not only good for widespread delivery, but are natural vehicles for data collection and experimental instrumentation. I will provide examples of research done in the context of widely used educational technologies that both contribute interesting scientific findings and have practical implications for increasing the quality of learning at scale.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78409659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recommending Self-Regulated Learning Strategies Does Not Improve Performance in a MOOC","authors":"René F. Kizilcec, M. Pérez-Sanagustín, Jorge J. Maldonado","doi":"10.1145/2876034.2893378","DOIUrl":"https://doi.org/10.1145/2876034.2893378","url":null,"abstract":"Many committed learners struggle to achieve their goal of completing a Massive Open Online Course (MOOC). This work investigates self-regulated learning (SRL) in MOOCs and tests if encouraging the use of SRL strategies can improve course performance. We asked a group of 17 highly successful learners about their own strategies for how to succeed in a MOOC. Their responses were coded based on a SRL framework and synthesized into seven recommendations. In a randomized experiment, we evaluated the effect of providing those recommendations to learners in the same course (N = 653). Although most learners rated the study tips as very helpful, the intervention did not improve course persistence or achievement. Results suggest that a single SRL prompt at the beginning of the course provides insufficient support. Instead, embedding technological aids that adaptively support SRL throughout the course could better support learners in MOOCs.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86319141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supporting Peer Instruction with Evidence-Based Online Instructional Templates","authors":"Tricia J. Ngoon, Alexander Gamero-Garrido, Scott R. Klemmer","doi":"10.1145/2876034.2893439","DOIUrl":"https://doi.org/10.1145/2876034.2893439","url":null,"abstract":"This work examines whether templates designed from principles of multimedia learning design, and learning sciences research, can support peer instruction in creating more effective educational content on the web. Initial results show that the structure and guidelines within these templates can help novices produce meaningful learning content while improving the overall learning experience. This experiment provides insights into how to design and implement structured outlines online for web users to share learning content, and potentially shift researchers' focus to more learner-centered online education.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85851582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Android Wear for Avoiding Procrastination Behaviours in MOOCs","authors":"C. Romero, Rebeca Cerezo, Jose Antonio Espino, Manuel Bermúdez","doi":"10.1145/2876034.2893412","DOIUrl":"https://doi.org/10.1145/2876034.2893412","url":null,"abstract":"This paper introduces a new feature for instructors to communicate with their MOOC learners via SmartWatches in a different way to the traditional e-mails in order to try to avoiding procrastination. We have developed an Android Wear-based SmartWatches application designed for receiving notifications from MOOCs, and a specific section in Google Course Builder interface that allows instructors to configure and send the messages to each user registered in the course. We have evaluated the implementation of our proposal in an Introduction to Philosophy MOOC. The number and percentage of students who did assessments on time, together with their comments in a satisfaction questionnaire present very promising results.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89429588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Student Learning using Log Data from Interactive Simulations on Climate Change","authors":"Elizabeth A. McBride, Jonathan M. Vitale, H. Gogel, Mario M. Martinez, Z. Pardos, M. Linn","doi":"10.1145/2876034.2893410","DOIUrl":"https://doi.org/10.1145/2876034.2893410","url":null,"abstract":"Interactive simulations are commonly used tools in technology enhanced education. Simulations can be a powerful tool for allowing students to engage in inquiry, especially in science disciplines. They can help students develop an understanding of complex science phenomena in which multiple variables are at play. Developing models for complex domains, like climate science, is important for learning. Equally important, though, is understanding how students use these simulations. Finding use patterns that lead to learning will allow us to develop better guidance for students who struggle to extract the useful information from the simulation. In this study, we generate features from action log data collected while students interacted with simulations on climate change. We seek to understand what types of features are important for student learning by using regression models to map features onto learning outcomes.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89628157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Distributed Esteemed Endorser Review: A Novel Approach to Participant Assessment in MOOCs","authors":"J. Kay, Tyler J. Nolan, Thomas M. Grello","doi":"10.1145/2876034.2893396","DOIUrl":"https://doi.org/10.1145/2876034.2893396","url":null,"abstract":"One of the most challenging aspects of developing a Massive Open Online Course (MOOC) is designing an accurate method to effectively assess participant knowledge and skills. The Distributed Esteemed Endorser Review (DEER) approach has been developed as an alternative for those MOOCs where traditional approaches to assessment are not appropriate. In DEER, course projects are certified in-person by an \"Esteemed Endorser\", an individual who is typically senior in rank to the student, but is not necessarily an expert in the course content. Not only does DEER provide a means to certify that course goals have been met, it also provides MOOC participants with the opportunity to share information about what they have learned with others at the local level.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81681586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ASSISTments Dataset from Multiple Randomized Controlled Experiments","authors":"Douglas Selent, Thanaporn Patikorn, N. Heffernan","doi":"10.1145/2876034.2893409","DOIUrl":"https://doi.org/10.1145/2876034.2893409","url":null,"abstract":"In this paper, we present a dataset consisting of data generated from 22 previously and currently running randomized controlled experiments inside the ASSIStments online learning platform. This dataset provides data mining opportunities for researchers to analyze ASSISTments data in a convenient format across multiple experiments at the same time. The data preprocessing steps are explained in detail to inform researchers about how this dataset was generated. A list of column descriptions is provided to define the columns in the dataset and a set of summary statistics are presented to briefly describe the dataset.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85877490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzz Testing Projects in Massive Courses","authors":"S. Sridhara, Brian Hou, Jeffrey Lu, John DeNero","doi":"10.1145/2876034.2876050","DOIUrl":"https://doi.org/10.1145/2876034.2876050","url":null,"abstract":"Scaffolded projects with automated feedback are core instructional components of many massive courses. In subjects that include programming, feedback is typically provided by test cases constructed manually by the instructor. This paper explores the effectiveness of fuzz testing, a randomized technique for verifying the behavior of programs. In particular, we apply fuzz testing to identify when a student's solution differs in behavior from a reference implementation by randomly exploring the space of legal inputs to a program. Fuzz testing serves as a useful complement to manually constructed tests. Instructors can concentrate on designing targeted tests that focus attention on specific issues while using fuzz testing for comprehensive error checking. In the first project of a 1,400-student introductory computer science course, fuzz testing caught errors that were missed by a suite of targeted test cases for more than 48% of students. As a result, the students dedicated substantially more effort to mastering the nuances of the assignment.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84758329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Opportunity Count Model: A Flexible Approach to Modeling Student Performance","authors":"Yan Wang, Korinn S. Ostrow, Seth A. Adjei, N. Heffernan","doi":"10.1145/2876034.2893382","DOIUrl":"https://doi.org/10.1145/2876034.2893382","url":null,"abstract":"Detailed performance data can be exploited to achieve stronger student models when predicting next problem correctness (NPC) within intelligent tutoring systems. However, the availability and importance of these details may differ significantly when considering opportunity count (OC), or the compounded sequence of problems a student experiences within a skill. Inspired by this intuition, the present study introduces the Opportunity Count Model (OCM), a unique approach to student modeling in which separate models are built for differing OCs rather than creating a blanket model that encompasses all OCs. We use Random Forest (RF), which can be used to indicate feature importance, to construct the OCM by considering detailed performance data within tutor log files. Results suggest that OC is significant when modeling student performance and that detailed performance data varies across OCs.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80322031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Student Misunderstandings using Constructed Responses","authors":"Kristin Stephens-Martinez, An Ju, C. Schoen, John DeNero, A. Fox","doi":"10.1145/2876034.2893395","DOIUrl":"https://doi.org/10.1145/2876034.2893395","url":null,"abstract":"In contrast to multiple-choice or selected response questions, constructed response questions can result in a wide variety of incorrect responses. However, constructed responses are richer in information. We propose a technique for using each student's constructed responses in order to identify a subset of their stable conceptual misunderstandings. Our approach is designed for courses with so many students that it is infeasible to interpret every distinct wrong answer manually. Instead, we label only the most frequent wrong answers with the misunderstandings that they indicate, then predict the misunderstandings associated with other wrong answers using statistical co-occurrence patterns. This tiered approach leverages a small amount of human labeling effort to seed an automated procedure that identifies misunderstandings in students. Our approach involves much less effort than inspecting all answers, substantially outperforms a baseline that does not take advantage of co-occurrence statistics, proves robust to different course sizes, and generalizes effectively across student cohorts.","PeriodicalId":20739,"journal":{"name":"Proceedings of the Third (2016) ACM Conference on Learning @ Scale","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86521736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}