All It Takes Is One: Evidence for a Strategy for Seeding Large Scale Peer Learning Interactions
Marti A. Hearst, A. Fox, Derrick Coetzee, Bjoern Hartmann
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728698
Abstract: The results of a study of online peer learning suggest that it may be advantageous to automatically assign students to small peer learning groups based on how many students initially answer questions correctly.

Educational Evaluation in the PKU SPOC Course "Data Structures and Algorithms"
Ming Zhang, Jile Zhu, Yanzhen Zou, Hongfei Yan, Dan Hao, Chuxiong Liu
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728666
Abstract: To study the impact of MOOCs, we conducted a SPOC experiment in the Data Structures and Algorithms course at Peking University. In this paper, we analyze student online activities, test scores, and two surveys using statistical methods (t-tests, analysis of variance, correlation analysis, and OLS regression) to understand which factors foster improvements in student learning. We find that the "SPOC + Flipped" format is a helpful mode for teaching algorithms, that time spent on the course and students' confidence have a positive impact on learning outcomes, and that SPOC resources should be used more fully.

Clustering-Based Active Learning for CPSGrader
Garvit Juniwal, Sakshi Jain, Alexandre Donzé, S. Seshia
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728702
Abstract: In this work, we propose and evaluate an active learning algorithm in the context of CPSGrader, an automatic grading and feedback generation tool for laboratory-based courses in the area of cyber-physical systems. CPSGrader detects the presence of certain classes of mistakes using test benches that are generated in part via machine learning from solutions that have the fault and those that do not (positive and negative examples). We develop a clustering-based active learning technique that selects, from a large database of unlabeled solutions, a small number of reference solutions for the expert to label that will be used as training data. The goal is to achieve better accuracy of fault identification with fewer reference solutions as compared to random selection. We demonstrate the effectiveness of our algorithm using data obtained from an on-campus laboratory-based course at UC Berkeley.

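The core idea of clustering-based exemplar selection can be sketched briefly. This is an illustrative pure-Python sketch, not the authors' implementation: it assumes solutions are already represented as numeric feature vectors, uses plain k-means with deterministic initialization, and picks the solution nearest each centroid as the reference to send to the expert for labeling. All function names are hypothetical.

```python
def _dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _kmeans(points, k, iters=20):
    """Lloyd's algorithm; initializes centroids to the first k points
    so the sketch is deterministic. Returns (centroids, assignments)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):  # assignment step
            assign[i] = min(range(k), key=lambda c: _dist2(p, centroids[c]))
        for c in range(k):              # update step
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return centroids, assign

def select_references(points, k):
    """Cluster the unlabeled solutions and return the index of the
    solution nearest each centroid -- the few exemplars to label."""
    centroids, assign = _kmeans(points, k)
    refs = []
    for c in range(k):
        members = [i for i in range(len(points)) if assign[i] == c]
        if members:
            refs.append(min(members, key=lambda i: _dist2(points[i], centroids[c])))
    return refs
```

The intuition is that labeling one representative per cluster covers the variety of student solutions more efficiently than labeling a random sample of the same size.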
PeerStudio: Rapid Peer Feedback Emphasizes Revision and Improves Performance
Chinmay Kulkarni, Michael S. Bernstein, Scott R. Klemmer
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2724670
Abstract: Rapid feedback is a core component of mastery learning, but feedback on open-ended work requires days or weeks in most classes today. This paper introduces PeerStudio, an assessment platform that leverages the large number of students' peers in online classes to enable rapid feedback on in-progress work. Students submit their draft, give rubric-based feedback on two peers' drafts, and then receive peer feedback. Students can integrate the feedback and repeat this process as often as they desire. In MOOC deployments, the median student received feedback in just twenty minutes. Rapid feedback on in-progress work improves course outcomes: in a controlled experiment, students' final grades improved when feedback was delivered quickly, but not if delayed by 24 hours. More than 3,600 students have used PeerStudio in eight classes, both massive and in-person. This research demonstrates how large classes can leverage their scale to encourage mastery through rapid feedback and revision.

Machine Learning for Learning at Scale
Peter Norvig
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2735205
Abstract: There is great enthusiasm for the idea that massive amounts of data from learners' online interactions with material can lead to a rapid improvement cycle, driven by analysis of the data, experimentation, and intervention to do more of what works and less of what doesn't. This talk discusses techniques for working with massive amounts of data.
Speaker bio: Peter Norvig is a Director of Research at Google Inc. Previously he was head of Google's core search algorithms group and of NASA Ames's Computational Sciences Division, making him NASA's senior computer scientist. He received the NASA Exceptional Achievement Award in 2001. He has taught at the University of Southern California and the University of California at Berkeley, from which he received a Ph.D. in 1986 and the distinguished alumni award in 2006. He was co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes. His publications include the books Artificial Intelligence: A Modern Approach (the leading textbook in the field), Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX. He is also the author of the Gettysburg Powerpoint Presentation and the world's longest palindromic sentence. He is a fellow of the AAAI, the ACM, the California Academy of Sciences, and the American Academy of Arts & Sciences.

Learnersourcing of Complex Assessments
Piotr Mitros
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728683
Abstract: We present results from a pilot study in which students successfully created complex assessments for a MOOC in introductory electronics -- an area with a very large expert-novice gap. Previous work in learnersourcing found that learners can productively contribute through simple tasks. However, many course resources require a high level of expertise to create, and prior work fell short on tasks with a large expert-novice gap, such as textbook creation or concept tagging. Since these constitute a substantial portion of course creation costs, addressing this issue is prerequisite to substantially shifting MOOC economics through learnersourcing. This represents one of the first successes in learnersourcing with a large expert-novice gap. In the pilot, we reached out to 206 students (out of thousands who met eligibility criteria), who contributed 14 complex, high-quality design problems. These results suggest a full cohort could contribute hundreds of problems. We achieved this through a four-pronged approach: (1) pre-selecting top learners, (2) a community feedback process, (3) a student mini-course in pedagogy, and (4) instructor review and involvement.

Measuring and Maximizing the Effectiveness of Honor Codes in Online Courses
Henry Corrigan-Gibbs, Nakull Gupta, Curtis G. Northcutt, Edward Cutrell, W. Thies
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728663
Abstract: We measure the effectiveness of a traditional honor code at deterring cheating in an online examination, and we compare it to that of a stern warning. Through experimental evaluation in a 409-student online course, we find that a pre-task warning leads to a significant decrease in the rate of cheating, while an honor code has a smaller (non-significant) effect. Unlike much prior work, we measure the rate of cheating directly and do not rely on potentially inaccurate post-examination surveys. Our findings demonstrate that replacing traditional honor codes with warnings could be a simple and effective way to deter cheating in online courses.

An Automated Grading/Feedback System for 3-View Engineering Drawings using RANSAC
Y. Kwon, Sara McMains
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2724682
Abstract: We propose a novel automated grading system that can compare two multiview engineering drawings consisting of three views that may have allowable translations, scales, and offsets, and that can recognize frequent error types as well as individual drawing errors. We show that translation-, scale-, and offset-invariant comparison can be conducted by estimating the affine transformation for each individual view within the drawings. Our system directly aims to evaluate students' skills in creating multiview engineering drawings. Since it is important for our students to be familiar with widely used software such as AutoCAD, our system does not require a separate interface or environment, but directly grades the saved DWG/DXF files from AutoCAD. We show the efficacy of the proposed algorithm by comparing its results with human grading. Beyond the advantages of convenience and accuracy, our system lets us analyze the common errors of the class as a whole, based on our data set of students' answers.

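The transformation-invariant comparison described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it shows only the inner model fit (a least-squares uniform scale plus per-axis translation between corresponding 2-D points), which a RANSAC loop would wrap with random sampling and outlier rejection; rotation and the handling of real DWG/DXF geometry are omitted, and all names are hypothetical.

```python
def fit_scale_translation(src, dst):
    """Least-squares fit of dst ~ s*src + t: uniform scale s and per-axis
    translation t mapping one list of 2-D points onto another."""
    n = len(src)
    cs = tuple(sum(p[i] for p in src) / n for i in (0, 1))  # src centroid
    cd = tuple(sum(p[i] for p in dst) / n for i in (0, 1))  # dst centroid
    num = sum((p[i] - cs[i]) * (q[i] - cd[i])
              for p, q in zip(src, dst) for i in (0, 1))
    den = sum((p[i] - cs[i]) ** 2 for p in src for i in (0, 1))
    s = num / den if den else 1.0
    t = (cd[0] - s * cs[0], cd[1] - s * cs[1])
    return s, t

def residual(src, dst, s, t):
    """Mean squared distance after aligning src onto dst --
    a small value indicates the two views match up to scale/offset."""
    return sum((s * p[0] + t[0] - q[0]) ** 2 + (s * p[1] + t[1] - q[1]) ** 2
               for p, q in zip(src, dst)) / len(src)
```

Fitting the transform per view is what makes the comparison invariant to where and at what scale the student happened to place each view on the sheet.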
M-CAFE: Managing MOOC Student Feedback with Collaborative Filtering
Mo Zhou, A. Cliff, Allen Huang, S. Krishnan, Brandie Nonnecke, Kanji Uchino, Samuelson Joseph, A. Fox, Ken Goldberg
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728681
Abstract: Ongoing student feedback on course content and assignments can be valuable for MOOC instructors in the absence of face-to-face interaction. To collect ongoing feedback and scalably identify valuable suggestions, we built the MOOC Collaborative Assessment and Feedback Engine (M-CAFE). This mobile platform allows MOOC students to numerically assess the course and their own performance, and to provide textual suggestions about how the course could be improved on a weekly basis. M-CAFE allows students to visualize how they compare with their peers and to read and evaluate what others have suggested, providing peer-to-peer collaborative filtering. We evaluate M-CAFE based on data from two EdX MOOCs.

Learn With Friends: The Effects of Student Face-to-Face Collaborations on Massive Open Online Course Activities
Christopher A. Brooks, Caren M. Stalburg, Tawanna R. Dillahunt, L. Robert
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale. DOI: https://doi.org/10.1145/2724660.2728667
Abstract: This work investigates whether enrolling in a Massive Open Online Course (MOOC) with friends or colleagues can improve a learner's performance and social interaction during the course. Our results suggest that signing up for a MOOC with peers correlates positively with the rate of course completion, level of achievement, and discussion forum usage. Further analysis suggests that a learner's interaction with their friends complements a MOOC by acting as a form of self-blended learning.