{"title":"Blending Peer Instruction with Just-In-Time Teaching: Jointly Optimal Task Scheduling with Feedback for Classroom Flipping","authors":"Jingting Li, Lin Ling, Chee-Wei Tan","doi":"10.1145/3430895.3460134","DOIUrl":"https://doi.org/10.1145/3430895.3460134","url":null,"abstract":"Blended learning often requires alternating between asynchronous pre-class and synchronous in-class activities using online technologies to enhance the overall learning experience. Subject to constraints on desired learning outcome specifications and individual student preference, can we jointly optimize pre-class and in-class tasks to improve the two-way interaction between students and the instructor? We leverage ideas of self-assessment in Just-In-Time Teaching and Peer Instruction to propose an optimization-theoretic framework to analyze the optimal trade-off between the time invested in two different learning tasks for each individual student. We show that the problem can be formulated as a linear program, which can be efficiently solved to determine the optimal amount of time for pre-class and in-class learning. 
We develop a mobile chatbot software integrated with feedback data analytics to blend asynchronous pre-class quiz assessment together with the synchronous in-class poll-quiz routine of Peer Instruction to achieve classroom flipping that can be used for remote and hybrid teaching and learning.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126095240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Effective Courseware at Scale: Investigating Automatically Generated Questions as Formative Practice","authors":"Rachel Van Campenhout, Noam Brown, Bill Jerome, Jeffrey S. Dittel, Benny G. Johnson","doi":"10.1145/3430895.3460162","DOIUrl":"https://doi.org/10.1145/3430895.3460162","url":null,"abstract":"Courseware is a comprehensive learning environment that engages students in a learning by doing approach while also giving instructors data-driven insights on their class, providing a scalable solution for many instructional models. However, courseware-and the volume of formative questions required to make it effective-is time-consuming and expensive to create. By using artificial intelligence for automatic question generation, we can reduce the time and cost of developing formative questions in courseware. However, it is critical that automatically generated (AG) questions have a level of quality on par with human-authored (HA) questions in order to be confident in their usage at scale. Therefore, our research question is: are student interactions with AG questions equivalent to HA questions with respect to engagement, difficulty, and persistence metrics? This paper evaluates data for AG and HA questions that students used as formative practice in their university Communication course. 
Analysis of AG and HA questions shows that our first generation of AG questions perform equally well as HA questions in multiple important respects.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115550316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring Additional Personalized Support While Attempting Exercise Problems in Online Learning Platforms","authors":"Yuya Asano, Madhurima Dutta, Trisha Thakur, Jaemarie Solyst, Stephanie Cristea, Helena Jovic, Andrew Petersen, J. Williams","doi":"10.1145/3430895.3460145","DOIUrl":"https://doi.org/10.1145/3430895.3460145","url":null,"abstract":"In online asynchronous learning environments, students are assigned exercises, but it is not clear how to incorporate the kinds of actions an in-person tutor might take such as explaining, providing more practice, prompting for reflection, and motivating. We explore approaches to adding \"Drop-Downs'' that appear after a student submits an answer and that contain additional information to support learning. We conducted randomized A/B experiments exploring the impact of these Drop-Downs on student learning in the online portion of a flipped CS1 course. The deployed Drop-Downs in this course provided explanations, reflective prompts, additional problems, and motivational messages. The results suggest that students benefit from various Drop-Downs in different contexts, indicating the possibility of personalizing content based on the student's state. We discuss the resulting design implications of Drop-Downs in online learning systems.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116027908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalization at Scale: Making Learning Personally Relevant in a Climate Science MOOC","authors":"Ido Roll, Ilana Ram, S. Harris","doi":"10.1145/3430895.3460154","DOIUrl":"https://doi.org/10.1145/3430895.3460154","url":null,"abstract":"Personalization and choice in learning activities can increase student engagement, satisfaction, and learning gains. But does this effect hold when implemented at scale? The current work explores the effects of personalization and learner's choice in a Climate Science MOOC. We manipulated these by creating two versions of course assignments. Learners who completed the assignments (N=219) received either Generic assignments focusing on global climate issues or Personalized assignments in which learners explored their own regions. Following the manipulation, learners in the Personalization group reported equal understanding of both Global and Local climate issues while learners in the Generic group reported better understanding of global issues and reduced understanding of local issues. Further, personalization did not affect interest or assignment length. We describe opportunities for personalization at scale and discuss their outcomes.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129115265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-informed Decision-making in TEFA Processes: An Empirical Study of a Process Derived from Peer-Instruction","authors":"Rialy Andriamiseza, Franck Silvestre, J. Parmentier, J. Broisin","doi":"10.1145/3430895.3460153","DOIUrl":"https://doi.org/10.1145/3430895.3460153","url":null,"abstract":"When formative assessment involves a large number of learners, Technology-Enhanced Formative Assessments are one of the most popular solutions. However, current TEFA processes lack data-informed decision-making. By analyzing a dataset gathered from a formative assessment tool, we provide evidence about how to improve decision-making in processes that ask learners to answer the same question before and after a confrontation with peers. Our results suggest that learners' understanding increases when the proportion of correct answers before the confrontation is close to 50%, or when learners consistently rate peers' rationales. Furthermore, peer ratings are more consistent when learners' confidence degrees are consistent. These results led us to design a decision-making model whose benefits will be studied in future works.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"22 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132974395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Analytics Dashboard Research Has Neglected Diversity, Equity and Inclusion","authors":"K. Williamson, René F. Kizilcec","doi":"10.1145/3430895.3460160","DOIUrl":"https://doi.org/10.1145/3430895.3460160","url":null,"abstract":"Learning analytic dashboards (LADs) have become more prevalent in higher education to help students, faculty, and staff make data-informed decisions. Despite extensive research on the design and usability of LADs, few studies have examined them in relation to issues of diversity, equity, and inclusion. We conducted a critical literature review to address three research questions: How does LAD research contribute to improving diversity, equity, and inclusion? How might LADs contribute to maintaining or exacerbating inequitable outcomes? And what future opportunities exist in this research space? Our review showed little use of LADs to address or improve issues of diversity, equity, and inclusion in the literature thus far. We argue that excluding these issues from LAD research is not an isolated oversight and it risks reinforcing existing inequities within the higher education system. We argue that LADs can be designed, researched, and deployed intentionally to advance equitable outcomes and help dismantle inequities in education. 
We highlight opportunities for future LAD research to address issues of diversity, equity, and inclusion.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131510476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Engineering @ Scale","authors":"Jim Goodell, Aaron M. Kessler, Mary Ellen Wiltrout, Monika Avello","doi":"10.1145/3430895.3460875","DOIUrl":"https://doi.org/10.1145/3430895.3460875","url":null,"abstract":"Scaled learning requires a novel set of practices on the part of professionals developing and delivering systems of scaled learning. According to IEEE's Industry Connections Industry Consortium for Learning Engineering (ICICLE), \"Learning engineering is a process and practice that applies the learning sciences, using human-centered engineering design methodologies, and data-informed decision-making to support learners and their development[1].\" This workshop is designed as a boot camp for learning engineering. It is structured around a series of micro-learning activities around this definition. Topics include: Learning engineering is a process Learning engineering applies the learning sciences Learning engineering is human centered Learning engineering is engineering Learning engineering is data-driven Learning engineering is a team sport Learning engineering is ethical These learning experiences are organized based on and include materials from the forthcoming book the Learning Engineering Toolkit[2]. 
This workshop will also give conference participants an opportunity to give input into models for scaling the profession of learning engineering and formative feedback for iterative development of the Learning Engineering Toolkit.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"501 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123419103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scaling Up Data Science Course Projects: A Case Study","authors":"B. Bhavya, Jinfeng Xiao, ChengXiang Zhai","doi":"10.1145/3430895.3460168","DOIUrl":"https://doi.org/10.1145/3430895.3460168","url":null,"abstract":"Large-scale, online Data Science (DS) courses and degree programs are becoming increasingly common due to the global rise in popularity and demand for data scientists. Although project-based learning is integral to gaining hands-on experience in DS education, providing fair, timely, and high-quality feedback on varied projects for a large number of diverse students is challenging. To address those challenges in scaling up the assessment of DS group projects, we integrated multiple techniques, such as rapid feedback, peer grading, graders as meta-reviewers, etc. We present a case study of deploying those strategies for group projects in a large online DS course titled Text Information Systems offered in Fall, 2020. We synthesize our findings from analyzing student and grader survey responses, and share useful lessons and future work.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124904805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Discussions in MOOC Forums: An Incremental Modeling Approach","authors":"Anastasios Ntourmas, Y. Dimitriadis, S. Daskalaki, N. Avouris","doi":"10.1145/3430895.3460137","DOIUrl":"https://doi.org/10.1145/3430895.3460137","url":null,"abstract":"Supervised classification models are commonly used for classifying discussions in a MOOC forum. In most cases these models require a tedious process for manual labeling the forum messages as training data. So, new methods are needed to reduce the human effort necessary for the preparation of such training datasets. In this study we follow an incremental approach in order to examine how soon after the beginning of a new course, we have collected enough data for training a supervised classification model. We show that by employing features that derive from a seeded topic modeling method, we achieve classifiers with reliable performance early enough in the course life, thus reducing significantly the human effort. The content of the MOOC platform is used to bias the topic extraction towards discussions related to (a) course content, (b) logistics, or (c) social interactions. Then, we develop a supervised model at the start of each week based on the topic features of all previous weeks and evaluate its performance in classifying the discussions for the rest of the course. Our approach was implemented in three different MOOCs of different subjects and different sizes. 
The findings reveal that supervised models are able to perform reliably quite early in a MOOC's life and retain a steady overall accuracy across the remaining weeks, without requiring to be trained with the entire forum dataset.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127135175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Feedback based on Predictive Data Improve Learners' Passing Rates in MOOCs? A Preliminary Analysis","authors":"M. Pérez-Sanagustín, Ronald Pérez-Álvarez, Jorge Maldonado-Mahauad, Esteban Villalobos, Isabel Hilliger, Josefina Hernandez, Diego Sapunar, Pedro Manuel Moreno-Marcos, P. Muñoz-Merino, C. D. Kloos, J. Marín","doi":"10.1145/3430895.3460991","DOIUrl":"https://doi.org/10.1145/3430895.3460991","url":null,"abstract":"This work in progress paper investigates if timely feedback increases learners' passing rate in a MOOC. An experiment conducted with 2,421 learners in the Coursera platform tests if weekly messages sent to groups of learners with the same probability of dropping out the course can improve retention. These messages can contain information about: (1) the average time spent in the course, or (2) the average time per learning session, or (3) the exercises performed, or (4) the video-lectures completed. Preliminary results show that the completion rate increased 12% with the intervention compared with data from 1,445 learners that participated in the same course in a previous session without the intervention. We discuss the limitations of these preliminary results and the future research derived from them.","PeriodicalId":125581,"journal":{"name":"Proceedings of the Eighth ACM Conference on Learning @ Scale","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}