{"title":"An Exploratory Evaluation of a Collaboration Feedback Report","authors":"Vanessa Echeverría, Marisol Wong-Villacrés, X. Ochoa, K. Chiluiza","doi":"10.1145/3506860.3506890","DOIUrl":"https://doi.org/10.1145/3506860.3506890","url":null,"abstract":"Providing formative feedback to foster collaboration and improve students’ practice has been an emerging topic in the CSCL and LA research communities. However, this pedagogical practice can be unrealistic in authentic classrooms, as observing and annotating improvements for every student and group exceeds the teacher’s capacity. In the research area of group work and collaborative learning, current learning analytics solutions have reported accurate computational models for understanding collaboration processes, yet evaluating formative collaboration feedback, where the end user is the student, remains an under-explored research area. This paper reports an exploratory evaluation of the effects of a collaboration feedback report through an authentic study conducted in regular classes. Fifty students from a Computer Science undergraduate program participated in the study. We followed a user-centered design approach to define six collaboration aspects that are relevant to students. These aspects were part of initial prototypes for the feedback report. In the exploratory intervention, we did not find differences between students who received the feedback report (experimental condition) and those who did not (control condition). 
Finally, this paper discusses design implications for further feedback report designs and interventions.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133327881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing Programming Knowledge Tracing by Interacting Programming Skills and Student Code","authors":"Mengxia Zhu, Siqi Han, Peisen Yuan, Xuesong Lu","doi":"10.1145/3506860.3506870","DOIUrl":"https://doi.org/10.1145/3506860.3506870","url":null,"abstract":"Programming education has received extensive attention in recent years due to the increasing demand for programming ability in almost all industries. Educational institutions have widely employed online judges for programming training, which help teachers automatically assess programming assignments by executing students’ code against test cases. However, a more important teaching task with online judges is to evaluate how well students master each programming skill, such as strings or pointers, so that teachers can give personalized feedback and help students progress toward success more efficiently. Previous studies have adopted deep knowledge tracing models to evaluate a student’s mastery level of skills during interaction with programming exercises. However, existing models generally follow the conventional assumption of knowledge tracing that each exercise requires only one skill, whereas in practice a programming exercise usually inspects the comprehensive use of multiple skills. Moreover, the features of student code are often simply concatenated with other input features without considering their relationship to the inspected programming skills. To bridge this gap, we propose a simple attention-based approach that learns from student code the features reflecting the multiple programming skills inspected by each programming exercise. In particular, we first use a program embedding method to obtain representations of student code. Then we use the skill embeddings of each programming exercise to query the embeddings of student code and form an aggregated hidden state representing how the inspected skills are used in the student code. 
We combine the learned hidden state with DKT (Deep Knowledge Tracing), an LSTM (Long Short-Term Memory)-based knowledge tracing model, and show improvements over the baseline model. We also point out possible directions for improving the current work.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133796775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
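The skill-guided aggregation described in the abstract above (skill embeddings attending over code-state vectors to form one hidden state per inspected skill) can be sketched roughly as follows. This is a minimal, illustrative reconstruction, not the paper's implementation; the function names, the dot-product attention scoring, and the assumption of pre-computed code-state and skill vectors are all assumptions of this sketch.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def skill_guided_aggregation(code_states, skill_embeddings):
    """For each inspected skill, attend over the code-state vectors
    (e.g. per-token outputs of a program embedding model) and return
    one aggregated hidden state per skill (attention-weighted sum).

    Hypothetical signature: both arguments are lists of equal-length
    float vectors.
    """
    aggregated = []
    for skill in skill_embeddings:
        # Score each code state against the skill embedding, then
        # normalize the scores into attention weights.
        scores = [dot(skill, state) for state in code_states]
        weights = softmax(scores)
        # Weighted sum of code states -> hidden state for this skill.
        dim = len(code_states[0])
        hidden = [sum(w * state[i] for w, state in zip(weights, code_states))
                  for i in range(dim)]
        aggregated.append(hidden)
    return aggregated
```

In a full model, the per-skill hidden states would then be combined with the exercise features and fed into the DKT recurrence; here only the attention step is shown.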
{"title":"How can Email Interventions Increase Students’ Completion of Online Homework? A Case Study Using A/B Comparisons","authors":"Angela M. Zavaleta Bernuy, Ziwen Han, Hammad Shaikh, Qiming Zheng, Lisa-Angelique Lim, Anna N. Rafferty, Andrew Petersen, J. J. Williams","doi":"10.1145/3506860.3506874","DOIUrl":"https://doi.org/10.1145/3506860.3506874","url":null,"abstract":"Email communication between instructors and students is ubiquitous, and it could be valuable to test how to make email messages more impactful. This paper explores the design space of using emails to get students to plan and reflect on starting weekly homework earlier. We deployed a series of email reminders using randomized A/B comparisons to test alternative factors in the design of these emails, providing examples of an experimental paradigm and metrics for a broader range of interventions. We also surveyed and interviewed instructors and students to compare their predictions about the effectiveness of the reminders with their actual impact. We present results showing which seemingly obvious predictions about effective emails are not borne out, while also finding evidence that these interventions merit further exploration, as they can sometimes motivate students to attempt their homework more often. We also present qualitative evidence about student opinions and behaviours after receiving the emails to guide further interventions. 
These findings provide insight into how to use randomized A/B comparisons in everyday channels such as emails, to provide empirical evidence to test our beliefs about the effectiveness of alternative design choices.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117251202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do Gender and Race Matter? Supporting Help-Seeking with Fair Peer Recommenders in an Online Algebra Learning Platform","authors":"Chenglu Li, Wanli Xing, W. Leite","doi":"10.1145/3506860.3506869","DOIUrl":"https://doi.org/10.1145/3506860.3506869","url":null,"abstract":"Discussion forums are important for students’ knowledge inquiry in online contexts, with help-seeking being an essential learning strategy in discussion forums. This study aimed to explore innovative methods to build a peer recommender that can provide fair and accurate intelligence to support help-seeking in online learning. Specifically, we have examined existing network embedding models, Node2Vec and FairWalk, to benchmark with the proposed fair network embedding (Fair-NE). A dataset of 187,450 post-reply pairs by 10,182 Algebra I students from 2015 to 2020 was sampled from Algebra Nation, an online algebra learning platform. The dataset was used to train and evaluate the engines of peer recommenders. We evaluated models with representation fairness, predictive accuracy, and predictive fairness. Our findings suggest that constructing fairness-aware models in learning analytics (LA) is crucial to tackling the potential bias in data and to creating trustworthy LA systems.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128833300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Challenges of using auto-correction tools for language learning","authors":"Sylvio Rüdian, Moritz Dittmeyer, Niels Pinkwart","doi":"10.1145/3506860.3506867","DOIUrl":"https://doi.org/10.1145/3506860.3506867","url":null,"abstract":"In language learning, getting corrective feedback on writing tasks is an essential didactic concept for improving learners' language skills. Although various tools for automatic correction exist, open-ended writing tasks still need to be corrected manually by teachers to provide helpful feedback to learners. In this paper, we explore the usefulness of an auto-correction tool in the context of language learning. In the first step, we compare the corrections of 100 learner texts suggested by a correction tool with those made by human teachers and examine the differences. In the second step, we conduct a qualitative analysis, investigating the requirements that need to be tackled to make existing proofreading tools useful for language learning. The results reveal that the aim of enhancing texts through proofreading in general is quite different from the purpose of providing corrective feedback in language learning. Only about one in four relevant errors marked by human teachers is correctly detected by the tool (recall=.26), whereas many expressions the tool flags as faulty are not errors at all (precision=.33). 
We identify and discuss the challenges that need to be addressed to adapt such tools for language learning.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129068209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparison of Learning Analytics Frameworks: a Systematic Review","authors":"M. Khalil, P. Prinsloo, Sharon Slade","doi":"10.1145/3506860.3506878","DOIUrl":"https://doi.org/10.1145/3506860.3506878","url":null,"abstract":"While learning analytics frameworks precede the official launch of learning analytics in 2011, there has been a proliferation of learning analytics frameworks since. This systematic review of learning analytics frameworks between 2011 and 2021 in three databases resulted in an initial corpus of 268 articles and conference proceeding papers based on the occurrence of “learning analytics” and “framework” in titles, keywords and abstracts. The final corpus of 46 frameworks were analysed using a coding scheme derived from purposefully selected learning analytics frameworks. The results found that learning analytics frameworks share a number of elements and characteristics such as source, development and application focus, a form of representation, data sources and types, focus and context. Less than half of the frameworks consider student data privacy and ethics. Finally, while design and process elements of these frameworks may be transferable and scalable to other contexts, users in different contexts will be best-placed to determine their transferability/scalability.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127187042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unpacking Instructors’ Analytics Use: Two Distinct Profiles for Informing Teaching","authors":"Qiujie Li, Yeonji Jung, Bernice d'Anjou, A. Wise","doi":"10.1145/3506860.3506905","DOIUrl":"https://doi.org/10.1145/3506860.3506905","url":null,"abstract":"This study addresses the gap in knowledge about differences in how instructors use analytics to inform teaching by examining the ways that thirteen college instructors engaged with a set of university-provided analytics. Using multiple walk-through interviews with the instructors and qualitative inductive coding, two profiles of instructor analytics use were identified that were distinct from each other in terms of the goals of analytics use, how instructors made sense of and took actions upon the analytics, and the ways that ethical concerns were conceived. Specifically, one group of instructors used analytics to help students get aligned to and engaged in the course, whereas the other group used analytics to align the course to meet students’ needs. Instructors in both profiles saw ethical questions as central to their learning analytics use, with instructors in one profile focusing on transparency and the other on student privacy and agency. 
These findings suggest the need to view analytics use as an integrated component of instructor teaching practices and envision complementary sets of technical and pedagogical support that can best facilitate the distinct activities aligned with each profile.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130636973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educational Explainable Recommender Usage and its Effectiveness in High School Summer Vacation Assignment","authors":"Kyosuke Takami, Yiling Dai, B. Flanagan, H. Ogata","doi":"10.1145/3506860.3506882","DOIUrl":"https://doi.org/10.1145/3506860.3506882","url":null,"abstract":"Explainable recommendations, which provide explanations of why an item is recommended, help to improve transparency, persuasiveness, and trustworthiness. However, little research in educational technology utilizes explainable recommendations. We developed an explanation generator using the parameters of Bayesian knowledge tracing models. We used this educational explainable recommendation system to investigate the effects of explanations on a summer vacation assignment for high school students. Comparing the click counts of recommended quizzes with and without explanations, we found that the number of clicks was significantly higher for quizzes with explanations. Furthermore, mining of system usage patterns revealed that students can be divided into three clusters: non-users, steady users, and late users. In the cluster of steady users, recommended quizzes with explanations were used continuously. These results suggest the effectiveness of an explainable recommendation system in the field of education.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122269905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Too Fast for Their Own Good: Analyzing a Decade of Student Exercise Responses to Explore the Impact of Math Solving Photo Apps","authors":"Jay Sloan-Lynch, Nathanael Gay, R. Watkins","doi":"10.1145/3506860.3506868","DOIUrl":"https://doi.org/10.1145/3506860.3506868","url":null,"abstract":"The introduction of math solving photo apps in late 2014 presented students with a tempting new way to solve math problems quickly and accurately. Despite widespread acknowledgement that students increasingly use these apps to complete their coursework, as well as growing concerns about cheating as more students learn online, the prevalence and impact of this technology remains largely unexplored. This study uses a large dataset consisting of 700 unique math exercises and over 82 million student submissions to investigate changes in exercise answering speeds during the last decade. Through a series of exploratory analyses, we identify dramatic shifts in exercise submission speed distributions in recent years, with increasing numbers of rapid responses suggesting growing student reliance on math solving photo technology to answer math problems on homework and exams. Our analyses also reveal that decreases in exercise answering times have occurred contemporaneously with the introduction and proliferation of math solving photo apps in education, and we further substantiate the role of these tools by verifying that exercise susceptibility to math solving photo apps is associated with decreases in submission times. 
We discuss potential applications of our findings to improve math assessment design and support students in adopting better learning strategies.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128558359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Socio-Semantic Network Motifs Framework for Discourse Analysis","authors":"Bodong Chen, Xinran Zhu, Hong Shui","doi":"10.1145/3506860.3506893","DOIUrl":"https://doi.org/10.1145/3506860.3506893","url":null,"abstract":"Effective collaborative discourse requires both cognitive and social engagement of students. To investigate complex socio-cognitive dynamics in collaborative discourse, this paper proposes to model collaborative discourse as a socio-semantic network (SSN) and then use network motifs – defined as recurring, significant subgraphs – to characterize the network and hence the discourse. To demonstrate the utility of our SSN motifs framework, we applied it to a sample dataset. While more work needs to be done, the SSN motifs framework shows promise as a novel, theoretically informed approach to discourse analysis.","PeriodicalId":185465,"journal":{"name":"LAK22: 12th International Learning Analytics and Knowledge Conference","volume":"270 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123115804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}