{"title":"Inequality","authors":"A. Franceschini, J. Sharkey, A. Beresford","doi":"10.1145/3330430.3333625","DOIUrl":"https://doi.org/10.1145/3330430.3333625","url":null,"abstract":"Online learning in STEM subjects requires an easy way to enter and automatically mark mathematical equations. Existing solutions did not meet our requirements, and therefore we developed Inequality, a new open-source system which works across all major browsers, supports both mouse and touch-based entry, and is usable by high school children and teachers. Inequality has been in use for over 2 years by about 20000 students and nearly 900 teachers as part of the Isaac online learning platform. In this paper we evaluate Inequality as an entry method, assess the flexibility of our approach, and the effect the system has on student behaviour. We prepared 343 questions which could be answered using either Inequality or a traditional method. Looking across over 472000 question attempts, we found that students were equally proficient at answering questions correctly with both entry methods. Moreover, students using Inequality required fewer attempts to arrive at the correct answer 73% of the time. In a detailed analysis of equation construction, we found that Inequality provides significant flexibility in the construction of mathematical expressions, accommodating different working styles. We expected students who first worked on paper before entering their answers would require fewer attempts than those who did not, however this was not the case (p = 0.0109). 
While our system is clearly usable, a user survey highlighted a number of issues which we have addressed in a subsequent update.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82994587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Master's at Scale: Five Years in a Scalable Online Graduate Degree","authors":"David A. Joyner, C. Isbell","doi":"10.1145/3330430.3333630","DOIUrl":"https://doi.org/10.1145/3330430.3333630","url":null,"abstract":"In 2014, Georgia Tech launched the first for-credit MOOC-based graduate degree program. In the five years since, the program has proven generally successful, enrolling over 14,000 unique students, and several other similar programs have followed in its footsteps. Existing research on the program has focused largely on details of individual classes; program-level research, however, has been scarce. In this paper, we delve into the program-level details of an at-scale Master's degree, from the story of its creation through the data generated by the program, including the numbers of applications, admissions, matriculations, and graduations; enrollment details including demographic information and retention patterns; trends in student grades and experience as compared to the on-campus student body; and alumni perceptions. Among our findings, we note that the program has stabilized at a retention rate of around 70%; that the program's growth has not slowed; that the program has not cannibalized its on-campus counterpart; and that the program has seen an upward trend in the number of women enrolled as well as a persistently higher number of underrepresented minorities than the on-campus program. 
Throughout this analysis, we abstract out distinct lessons that should inform the development and growth of similar programs.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81803453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting the difficulty of automatic item generators on exams from their difficulty on homeworks","authors":"Binglin Chen, Matthew West, C. Zilles","doi":"10.1145/3330430.3333647","DOIUrl":"https://doi.org/10.1145/3330430.3333647","url":null,"abstract":"To design good assessments, it is useful to have an estimate of the difficulty of a novel exam question before running an exam. In this paper, we study a collection of a few hundred automatic item generators (short computer programs that generate a variety of unique item instances) and show that their exam difficulty can be roughly predicted from student performance on the same generator during pre-exam practice. Specifically, we show that the rate that students correctly respond to a generator on an exam is on average within 5% of the correct rate for those students on their last practice attempt. This study is conducted with data from introductory undergraduate Computer Science and Mechanical Engineering courses.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"NS30 12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89638158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing an Intervention to Advance Learning At Scale","authors":"Samaa Haniya","doi":"10.1145/3330430.3333667","DOIUrl":"https://doi.org/10.1145/3330430.3333667","url":null,"abstract":"With the rise of technology advancements we witness every day in our contemporary life in general, and in the education field in specific, new ways of learning are emerging, such as Massive Open Online Courses (MOOCs). MOOCs have grown rapidly for the past few years, yet meeting the needs of massive and diverse learners and keeping them motivated to learn is still a challenge. To address this concern, we have developed an intervention to meet students' learning needs and keep them motivated to learn according to their capabilities. In this paper, we will discuss the intervention and report on the preliminary results drawing on the quantitative and qualitative data of the course survey to interpret learners experiences using this approach.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91440116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring Difficulty of Introductory Programming Tasks","authors":"Tomáš Effenberger, Jaroslav Čechák, Radek Pelánek","doi":"10.1145/3330430.3333641","DOIUrl":"https://doi.org/10.1145/3330430.3333641","url":null,"abstract":"Quantification of the difficulty of problem solving tasks has many applications in the development of adaptive learning systems, e.g., task sequencing, student modeling, and insight for content authors. There are, however, many potential conceptualizations and measures of problem difficulty and the computation of difficulty measures is influenced by biases in data collection. In this work, we explore difficulty measures for introductory programming tasks. The results provide insight into non-trivial behavior of even simple difficulty measures.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87514318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achievements for building a learning community","authors":"Kevin Hartman, S. Ng, Aishwarya Lakshminarasimhan, Thangamani Ramasamy, Melika Farahani, Chris Boesch","doi":"10.1145/3330430.3333672","DOIUrl":"https://doi.org/10.1145/3330430.3333672","url":null,"abstract":"Twice a year the National University of Singapore hosts computer programming events open to the nation's secondary, junior college, polytechnic and technical education students. To qualify for the live events, participants complete online programming activities during a month-long qualification phase open to all non-university students over the age of 12. The activities include game-based learning and traditional coding problems. During the past year, more than 1700 students participated in the two qualification phases and more than 200 students participated in the live events. At these events, students pair-program to test their programming abilities and showcase their coded creations in a tournament format. In the accompanying poster, we describe our work to build a community of intrinsically motivated learners and develop the technical infrastructure to support them both at scale during the qualification phase and live events. We conclude by detailing our plans for leveraging the community as a site for research on learning going forward.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74713143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teaching UI Design at Global Scales: A Case Study of the Design of Collaborative Capstone Projects for MOOCs","authors":"H. Cheng, Bowen Yu, Siwei Fu, Jian Zhao, Brent J. Hecht, J. Konstan, L. Terveen, S. Yarosh, Haiyi Zhu","doi":"10.1145/3330430.3333635","DOIUrl":"https://doi.org/10.1145/3330430.3333635","url":null,"abstract":"Group projects are an essential component of teaching user interface (UI) design. We identified six challenges in transferring traditional group projects into the context of Massive Open Online Courses: managing dropout, avoiding free-riding, appropriate scaffolding, cultural and time zone differences, and establishing common ground. We present a case study of the design of a group project for a UI Design MOOC, in which we implemented technical tools and social structures to cope with the above challenges. Based on survey analysis, interviews, and team chat data from the students over a six-month period, we found that our socio-technical design addressed many of the obstacles that MOOC learners encountered during remote collaboration. We translate our findings into design implications for better group learning experiences at scale.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73551256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"automaTA","authors":"Changyoon Lee, D. Han, Hyoungwook Jin, Alice H. Oh","doi":"10.1145/3330430.3333658","DOIUrl":"https://doi.org/10.1145/3330430.3333658","url":null,"abstract":"When online learners have questions that are related to a specific task, they often use Q&A boards instead of web search because they are looking for context-specific answers. While lecturers, teaching assistants, and other learners can provide context-specific answers on the Q&A boards, there is often a high response latency which can impede their learning. We present automaTA, a prototype that suggests context-specific answers to online learners' questions by capturing the context of the questions. Our solution is to automate the response generation with a human-machine mixed approach, where humans generate high-quality answers, and the human-generated responses are used to train an automated algorithm to provide context-specific answers. automaTA adopts this approach as a prototype in which it generates automated answers for function-related questions in an online programming course. We conduct two user studies with undergraduate and graduate students with little or no experience with Python and found the potential that automaTA can automatically provide answers to context-specific questions without a human instructor, at scale.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75707124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What do students at distance universities think about AI?","authors":"Wayne Holmes, S. Anastopoulou","doi":"10.1145/3330430.3333659","DOIUrl":"https://doi.org/10.1145/3330430.3333659","url":null,"abstract":"Algorithms, drawn from Artificial Intelligence (AI) technologies, are increasingly being used in distance education. However, currently little is known about the attitudes of distance education students to the benefits and risks associated with AI. For example, is AI broadly welcomed by distance education students, thought to be irrelevant, or disliked? Here, we present the initial findings of a survey of students from the UK's largest distance university as a first step towards addressing the question \"What do students at distance universities think about AI?\" Responses from the 222 contributors suggest that these students do expect AI to be beneficial for their future learning, with more respondents selecting potential benefits than selecting risks. Nonetheless, it is important to extend this exploratory study to students in other universities worldwide, and to other stakeholders.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76308674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Skill Hierarchy for Multi-Level Modeling with Elo Rating System","authors":"M. Yudelson, Y. Rosen, S. Polyak, J. Torre","doi":"10.1145/3330430.3333645","DOIUrl":"https://doi.org/10.1145/3330430.3333645","url":null,"abstract":"In this paper, we are discussing the case of offering retired assessment items as practice problems for the purposes of learning in a system called ACT Academy. In contrast to computer-assisted learning platforms, where students consistently focus on small sets of skills they practice till mastery, in our case, students are free to explore the whole subject domain. As a result, they have significantly lower attempt counts per individual skill. We have developed and evaluated a student modeling approach that differs from traditional approaches to modeling skill acquisition by leveraging the hierarchical relations in the skill taxonomy used for indexing practice problems. Results show that when applied in systems like ACT Academy, this approach offers significant improvements in terms of predicting student performance.","PeriodicalId":20693,"journal":{"name":"Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81114605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}