{"title":"Turn-taking analysis of small group collaboration in an engineering discussion classroom","authors":"Robin Jephthah Rajarathinam, C. D'Angelo","doi":"10.1145/3576050.3576099","DOIUrl":"https://doi.org/10.1145/3576050.3576099","url":null,"abstract":"This preliminary study focuses on using voice activity detection (VAD) algorithms to extract turn information of small group work detected from recorded individual audio stream data from undergraduate engineering discussion sections. Video data along with audio were manually coded for collaborative behavior of students and teacher-student interaction. We found that individual audio data can be used to obtain features that can describe group work in noisy classrooms. We observed patterns in student turn taking and talk duration during various sections of the classroom which matched with the video coded data. Results show that high quality individual audio data can be effective in describing collaborative processes that occurs in the classroom. Future directions on using prosodic features and implications on how we can conceptualize collaborative group work using audio data are discussed.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116463341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated, content-focused feedback for a writing-to-learn assignment in an undergraduate organic chemistry course","authors":"Field M. Watts, Amber J. Dood, G. Shultz","doi":"10.1145/3576050.3576053","DOIUrl":"https://doi.org/10.1145/3576050.3576053","url":null,"abstract":"Writing-to-learn (WTL) pedagogy supports the implementation of writing assignments in STEM courses to engage students in conceptual learning. Recent studies in the undergraduate STEM context demonstrate the value of implementing WTL, with findings that WTL can support meaningful learning and elicit students’ reasoning. However, the need for instructors to provide feedback on students’ writing poses a significant barrier to implementing WTL; this barrier is especially notable in the context of introductory organic chemistry courses at large universities, which often have large enrollments. This work describes one approach to overcome this barrier by presenting the development of an automated feedback tool for providing students with formative feedback on their responses to an organic chemistry WTL assignment. This approach leverages machine learning models to identify features of students’ mechanistic reasoning in response to WTL assignments in a second-semester, introductory organic chemistry laboratory course. The automated feedback tool development was guided by a framework for designing automated feedback, theories of self-regulated learning, and the components of effective WTL pedagogy. 
Herein, we describe the design of the automated feedback tool and report our initial evaluation of the tool through pilot interviews with organic chemistry students.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128893426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Each Encounter Counts: Modeling Language Learning and Forgetting","authors":"B. Ma, G. Hettiarachchi, Sora Fukui, Yuji Ando","doi":"10.1145/3576050.3576062","DOIUrl":"https://doi.org/10.1145/3576050.3576062","url":null,"abstract":"Language learning applications usually estimate the learner’s language knowledge over time to provide personalized practice content for each learner at the optimal timing. However, accurately predicting language knowledge or linguistic skills is much more challenging than math or science knowledge, as many language tasks involve memorization and retrieval. Learners must memorize a large number of words and meanings, which are prone to be forgotten without practice. Although a few studies consider forgetting when modeling learners’ language knowledge, they tend to apply traditional models, consider only partial information about forgetting, and ignore linguistic features that may significantly influence learning and forgetting. This paper focuses on modeling and predicting learners’ knowledge by considering their forgetting behavior and linguistic features in language learning. Specifically, we first explore the existence of forgetting behavior and cross-effects in real-world language learning datasets through empirical studies. Based on these, we propose a model for predicting the probability of recalling a word given a learner’s practice history. The model incorporates key information related to forgetting, question formats, and semantic similarities between words using the attention mechanism. Experiments on two real-world datasets show that the proposed model improves performance compared to baselines. Moreover, the results indicate that combining multiple types of forgetting information and item format improves performance. 
In addition, we find that incorporating semantic features, such as word embeddings, to model similarities between words in a learner’s practice history and their effects on memory also improves the model.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128920024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learner-centred Analytics of Feedback Content in Higher Education","authors":"Jionghao Lin, Wei Dai, Lisa-Angelique Lim, Yi-Shan Tsai, R. F. Mello, Hassan Khosravi, D. Gašević, Guanliang Chen","doi":"10.1145/3576050.3576064","DOIUrl":"https://doi.org/10.1145/3576050.3576064","url":null,"abstract":"Feedback is an effective way to assist students in achieving learning goals. The conceptualisation of feedback is gradually moving from feedback as information to feedback as a learner-centred process. To demonstrate feedback effectiveness, feedback as a learner-centred process should be designed to provide quality feedback content and promote student learning outcomes on the subsequent task. However, it remains unclear how instructors adopt the learner-centred feedback framework for feedback provision in the teaching practice. Thus, our study made use of a comprehensive learner-centred feedback framework to analyse feedback content and identify the characteristics of feedback content among student groups with different performance changes. Specifically, we collected the instructors’ feedback on two consecutive assignments offered by an introductory to data science course at the postgraduate level. On the basis of the first assignment, we used the status of student grade changes (i.e., students whose performance increased and those whose performance did not increase on the second assignment) as the proxy of the student learning outcomes. Then, we engineered and extracted features from the feedback content on the first assignment using a learner-centred feedback framework and further examined the differences of these features between different groups of student learning outcomes. Lastly, we used the features to predict student learning outcomes by using widely-used machine learning models and provided the interpretation of predicted results by using the SHapley Additive exPlanations (SHAP) framework. 
We found that 1) most features from the feedback content presented significant differences between the groups of student learning outcomes, 2) the gradient boost tree model could effectively predict student learning outcomes, and 3) SHAP could transparently interpret the feature importance on predictions.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123939230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Modalities in Detecting Behavioral Engagement in Collaborative Game-Based Learning","authors":"F. M. Fahid, S. J. Lee, Bradford W. Mott, Jessica Vandenberg, Halim Acosta, T. Brush, Krista D. Glazewski, C. Hmelo‐Silver, James Lester","doi":"10.1145/3576050.3576079","DOIUrl":"https://doi.org/10.1145/3576050.3576079","url":null,"abstract":"Collaborative game-based learning environments have significant potential for creating effective and engaging group learning experiences. These environments offer rich interactions between small groups of students by embedding collaborative problem solving within immersive virtual worlds. Students often share information, ask questions, negotiate, and construct explanations between themselves towards solving a common goal. However, students sometimes disengage from the learning activities, and due to the nature of collaboration, their disengagement can propagate and negatively impact others within the group. From a teacher's perspective, it can be challenging to identify disengaged students within different groups in a classroom as they need to spend a significant amount of time orchestrating the classroom. Prior work has explored automated frameworks for identifying behavioral disengagement. However, most prior work relies on a single modality for identifying disengagement. In this work, we investigate the effects of using multiple modalities to detect disengagement behaviors of students in a collaborative game-based learning environment. For that, we utilized facial video recordings and group chat messages of 26 middle school students while they were interacting with Crystal Island: EcoJourneys, a game-based learning environment for ecosystem science. Our study shows that the predictive accuracy of a unimodal model heavily relies on the modality of the ground truth, whereas multimodal models surpass the unimodal models, trading resources for accuracy. 
Our findings can benefit future researchers in designing behavioral engagement detection frameworks for assisting teachers in using collaborative game-based learning within their classrooms.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"38 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133111357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Choice Time: A Metric for Better Understanding Collaboration in Interactive Museum Exhibits","authors":"M. Berland, Vishesh Kumar","doi":"10.1145/3576050.3576088","DOIUrl":"https://doi.org/10.1145/3576050.3576088","url":null,"abstract":"In this paper, we propose a new metric – Joint Choice Time (JCT) – to measure how and when visitors are collaborating around an interactive museum exhibit. This extends dwell time, one of the most commonly used metrics for museum engagement – which tends to be individual, and sacrifices insight into activity and learning details for measurement simplicity. We provide an exemplar of measuring JCT using a common “diversity metric” for collaborative choices and potential outcomes. We provide an implementable description of the metric, results from using the metric with our own data, and potential implications for designing museum exhibits and easily measuring social engagement. Here, we apply JCT to an interactive exhibit game called “Rainbow Agents” where museum visitors can play independently or work together to tend to a virtual garden using computer science concepts. Our data showed that diversity of meaningful choices positively correlated with both dwell time and diversity of positive and creative outcomes. 
JCT - as a productive as well as easy to access measure of social work - provides an example for learning analytics practitioners and researchers (especially in museums) to consider centering social engagement and work as a rich space for easily assessing effective learning experiences for museum visitors.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121320265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The current state of using learning analytics to measure and support K-12 student engagement: A scoping review","authors":"Melissa Bond, Olga Viberg, Nina Bergdahl","doi":"10.1145/3576050.3576085","DOIUrl":"https://doi.org/10.1145/3576050.3576085","url":null,"abstract":"Student engagement has been identified as a critical construct for understanding and predicting educational success. However, research has shown that it can be hard to align data-driven insights of engagement with observed and self-reported levels of engagement. Given the emergence and increasing application of learning analytics (LA) within K-12 education, further research is needed to understand how engagement is being conceptualized and measured within LA research. This scoping review identifies and synthesizes literature published between 2011-2022, focused on LA and student engagement in K-12 contexts, and indexed in five international databases. 27 articles and conference papers from 13 different countries were included for review. We found that most of the research was undertaken in middle school years within STEM subjects. The results show that there is a wide discrepancy in researchers’ understanding and operationalization of engagement and little evidence to suggest that LA improves learning outcomes and support. However, the potential to do so remains strong. 
Guidance is provided for future LA engagement research to better align with these goals.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125953363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CVPE: A Computer Vision Approach for Scalable and Privacy-Preserving Socio-spatial, Multimodal Learning Analytics","authors":"Xinyu Li, Lixiang Yan, Linxuan Zhao, Roberto Martínez-Maldonado, D. Gašević","doi":"10.1145/3576050.3576145","DOIUrl":"https://doi.org/10.1145/3576050.3576145","url":null,"abstract":"Capturing data on socio-spatial behaviours is essential in obtaining meaningful educational insights into collaborative learning and teamwork in co-located learning contexts. Existing solutions, however, have limitations regarding scalability and practicality since they rely largely on costly location tracking systems, are labour-intensive, or are unsuitable for complex learning environments. To address these limitations, we propose an innovative computer-vision-based approach – Computer Vision for Position Estimation (CVPE) – for collecting socio-spatial data in complex learning settings where sophisticated collaborations occur. CVPE is scalable and practical with a fast processing time and only needs low-cost hardware (e.g., cameras and computers). The built-in privacy protection modules also minimise potential privacy and data security issues by masking individuals’ facial identities and provide options to automatically delete recordings after processing, making CVPE a suitable option for generating continuous multimodal/classroom analytics. The potential of CVPE was evaluated by applying it to analyse video data about teamwork in simulation-based learning. The results showed that CVPE extracted socio-spatial behaviours relatively reliably from video recordings compared to indoor positioning data. These socio-spatial behaviours extracted with CVPE uncovered valuable insights into teamwork when analysed with epistemic network analysis. 
The limitations of CVPE for effective use in learning analytics are also discussed.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122265755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to Build More Generalizable Models for Collaboration Quality? Lessons Learned from Exploring Multi-Context Audio-Log Datasets using Multimodal Learning Analytics","authors":"Pankaj Chejara, L. Prieto, M. Rodríguez-Triana, Reet Kasepalu, Adolfo Ruiz-Calleja, Shashi Kant Shankar","doi":"10.1145/3576050.3576144","DOIUrl":"https://doi.org/10.1145/3576050.3576144","url":null,"abstract":"Multimodal learning analytics (MMLA) research for building collaboration quality estimation models has shown significant progress. However, the generalizability of such models is seldom addressed. In this paper, we address this gap by systematically evaluating the across-context generalizability of collaboration quality models developed using a typical MMLA pipeline. This paper further presents a methodology to explore modelling pipelines with different configurations to improve the generalizability of the model. We collected 11 multimodal datasets (audio and log data) from face-to-face collaborative learning activities in six different classrooms with five different subject teachers. Our results showed that the models developed using the often-employed MMLA pipeline degraded in terms of Kappa from Fair (.20 < Kappa < .40) to Poor (Kappa < .20) when evaluated across contexts. This degradation in performance was significantly ameliorated with pipelines that emerged as high-performing from our exploration of 32 pipelines. Furthermore, our exploration of pipelines provided statistical evidence that often-overlooked contextual data features improve the generalizability of a collaboration quality model. 
With these findings, we make recommendations for the modelling pipeline which can potentially help other researchers in achieving better generalizability in their collaboration quality estimation models.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114957855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Item Response Theory Parameters Using Question Statements Texts","authors":"Wemerson Marinho, E. W. Clua, Luis Martí, Karla Marinho","doi":"10.1145/3576050.3576139","DOIUrl":"https://doi.org/10.1145/3576050.3576139","url":null,"abstract":"Recently, new Neural Language Models pre-trained on a massive corpus of texts are available. These models encode statistical features of the languages through their parameters, creating better word vector representations that allow the training of neural networks with smaller sample sets. In this context, we investigate the application of these models to predict Item Response Theory parameters in multiple choice questions. More specifically, we apply our models for the Brazilian National High School Exam (ENEM) questions using the text of their statements and propose a novel optimization target for regression: Item Characteristic Curve. The architecture employed could predict the difficulty parameter b of the ENEM 2020 and 2021 items with a mean absolute error of 70 points. Calculating the IRT score in each knowledge area of the exam for a sample of 100,000 students, we obtained a mean absolute below 40 points for all knowledge areas. Considering only the top quartile, the exam’s main target of interest, the average error was less than 30 points for all areas, being the majority lower than 15 points. 
Such performance allows predicting parameters on newly created questions, composing mock tests for student training, and analyzing their performance with excellent precision, dispensing with the need for costly item calibration pre-test step.","PeriodicalId":394433,"journal":{"name":"LAK23: 13th International Learning Analytics and Knowledge Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127662414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}