Title: Open learner models and learning analytics dashboards: a systematic review
Authors: Robert G. Bodily, J. Kay, V. Aleven, I. Jivet, Dan Davis, Françeska Xhakaj, K. Verbert
DOI: 10.1145/3170358.3170409
Published: 2018-03-07, Proceedings of the 8th International Conference on Learning Analytics and Knowledge
Abstract: This paper aims to link student-facing Learning Analytics Dashboards (LADs) to the corpus of research on Open Learner Models (OLMs), as both have similar goals. We conducted a systematic review of the OLM literature and compared the results with a previously conducted review of LADs for learners in terms of (i) data use and modelling, (ii) key publication venues, (iii) authors and articles, (iv) key themes, and (v) system evaluation. We highlight the similarities and differences between research on LADs and OLMs. Our key contribution is a bridge between these two areas as a foundation for building on the strengths of each. We report the following key results: of reports of new OLMs, almost 60% are based on a single type of data; 33% use behavioral metrics; 39% support input from the user; 37% have complex models; and just 6% involve multiple applications. Key associated themes include intelligent tutoring systems, learning analytics, and self-regulated learning. Notably, compared with LADs, OLM research is more likely to be interactive (81% of papers versus 31% for LADs), to report evaluations (76% versus 59%), to use assessment data (100% versus 37%), and to provide a comparison standard for students (52% versus 38%), but less likely to use behavioral metrics or resource-use data (33% versus 75% for LADs). OLM work also places a heightened focus on learner control and access to one's own data.

{"title":"Linking students' timing of engagement to learning design and academic performance","authors":"Quan Nguyen, M. Huptych, B. Rienties","doi":"10.1145/3170358.3170398","DOIUrl":"https://doi.org/10.1145/3170358.3170398","url":null,"abstract":"In recent years, the connection between Learning Design (LD) and Learning Analytics (LA) has been emphasized by many scholars as it could enhance our interpretation of LA findings and translate them to meaningful interventions. Together with numerous conceptual studies, a gradual accumulation of empirical evidence has indicated a strong connection between how instructors design for learning and student behaviour. Nonetheless, students' timing of engagement and its relation to LD and academic performance have received limited attention. Therefore, this study investigates to what extent students' timing of engagement aligned with instructor learning design, and how engagement varied across different levels of performance. The analysis was conducted over 28 weeks using trace data, on 387 students, and replicated over two semesters in 2015 and 2016. Our findings revealed a mismatch between how instructors designed for learning and how students studied in reality. In most weeks, students spent less time studying the assigned materials on the VLE compared to the number of hours recommended by instructors. The timing of engagement also varied, from in advance to catching up patterns. High-performing students spent more time studying in advance, while low-performing students spent a higher proportion of their time on catching-up activities. This study reinforced the importance of pedagogical context to transform analytics into actionable insights.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134290617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A multi-dimensional analysis of writing flexibility in an automated writing evaluation system","authors":"L. Allen, A. Likens, D. McNamara","doi":"10.1145/3170358.3170404","DOIUrl":"https://doi.org/10.1145/3170358.3170404","url":null,"abstract":"The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill. However, the features of the task, learner, and educational context that influence this flexibility remain largely unknown. The current study extends this research by examining relations between linguistic flexibility, reading comprehension ability, and feedback in the context of an automated writing evaluation system. Students (n = 131) wrote and revised six essays in an automated writing evaluation system and were provided both summative and formative feedback on their writing. Additionally, half of the students had access to a spelling and grammar checker that provided lower-level feedback during the writing period. The results provide evidence for the fact that developing writers demonstrate linguistic flexibility across the essays that they produce. However, analyses also indicate that lower-level feedback (i.e., spelling and grammar feedback) have little to no impact on the properties of students' essays nor on their variability across prompts or drafts. Overall, the current study provides important insights into the role of flexibility in writing skill and develops a strong foundation on which to conduct future research and educational interventions.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131149998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Using embedded formative assessment to predict state summative test scores
Authors: Stephen E. Fancsali, Guoguo Zheng, Yanyan Tan, Steven Ritter, Susan R. Berman, April Galyardt
DOI: 10.1145/3170358.3170392
Published: 2018-03-07, Proceedings of the 8th International Conference on Learning Analytics and Knowledge
Abstract: If we wish to embed assessment for accountability within instruction, we need to better understand the relative contribution of different types of learner data to statistical models that predict scores on assessments used for accountability purposes. The present work scales up and extends predictive models of math test scores from the existing literature and specifies six categories of models that incorporate information about student prior knowledge, socio-demographics, and performance within the MATHia intelligent tutoring system. Linear regression and random forest models are learned within each category and generalized over a sample of more than 23,000 learners in Grades 6, 7, and 8 over three academic years in Miami-Dade County Public Schools. After briefly exploring hierarchical models of these data, we discuss a variety of technical and practical applications, limitations, and open questions related to this work, especially concerning the potential use of instructional platforms like MATHia as a replacement for time-consuming standardized tests.

{"title":"Conceptualizing co-enrollment: accounting for student experiences across the curriculum","authors":"M. Brown, R. DeMonbrun, Stephanie D. Teasley","doi":"10.1145/3170358.3170366","DOIUrl":"https://doi.org/10.1145/3170358.3170366","url":null,"abstract":"In this study, we develop and test three measures for conceptualizing the potential impact of co-enrollment in different courses on students' changing risk for academic difficulty in a focal course. Two of these measures, concurrent enrollment in at least one difficult course and academic difficulty in the prior week in courses other than the focal course, significantly increase students' odds of academic difficulty in the focal course in our models. Our results have implications for the designs of Early Warning Systems and suggest that academic planners consider the relationship between course co-enrollment and students' academic success.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128620074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph-based visual topic dependency models: supporting assessment design and delivery at scale","authors":"K. Cooper, Hassan Khosravi","doi":"10.1145/3170358.3170418","DOIUrl":"https://doi.org/10.1145/3170358.3170418","url":null,"abstract":"Educational environments continue to rapidly evolve to address the needs of diverse, growing student populations, while embracing advances in pedagogy and technology. In this changing landscape ensuring the consistency among the assessments for different offerings of a course (within or across terms), providing meaningful feedback about students' achievements, and tracking students' progression over time are all challenging tasks, particularly at scale. Here, a collection of visual Topic Dependency Models (TDMs) is proposed to help address these challenges. It visualises the required topics and their dependencies at a course level (e.g., CS 100) and assessment achievement data at the classroom level (e.g., students in CS 100 Term 1 2016 Section 001) both at one point in time (static) and over time (dynamic). The collection of TDMs share a common, two-weighted graph foundation. An algorithm is presented to create a TDM (static achievement for a cohort). An open-source, proof of concept implementation of the TDMs is under development; the current version is described briefly in terms of its support for visualising existing (historical, test) and synthetic data generated on demand.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116981250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a data archiving solution for learning analytics","authors":"Sarah Taylor, Pablo Munguia","doi":"10.1145/3170358.3170415","DOIUrl":"https://doi.org/10.1145/3170358.3170415","url":null,"abstract":"Data solutions in the teaching and learning space are in need of pro-active innovations in data management, to ensure that systems for learning analytics can scale up to match the size of datasets now available. Here, we illustrate the scale at which a Learning Management System (LMS) accumulates data, and discuss the barriers to using this data for in-depth analyses. We illustrate the exponential growth of our LMS data to represent a single example dataset, and highlight the broader need for taking a pro-active approach to dimensional modelling in learning analytics, anticipating that common learning analytics questions will be computationally expensive, and that the most useful data structures for learning analytics will not necessarily follow those of the source dataset.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129005581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological foundations for the measurement of learning in learning analytics","authors":"Sandra Milligan","doi":"10.1145/3170358.3170391","DOIUrl":"https://doi.org/10.1145/3170358.3170391","url":null,"abstract":"Learning analysts often claim to measure learning, but their work has attracted growing concern about whether or not the measures are sufficiently accurate, fair, reliable, and valid, with utility for educators and interpretable by them. This paper considers these issues in the light of practices of scholars in more established fields, educational measurement particularly. The focus is on what really matters about methodologies for measuring learning, including foundational assumptions about the nature of learning, what is understood by the term `measured', the criteria applied when assessing quality of data, the standards of proof required to establish validity, reliability, generalizability, utility and interpretability of findings, and assumptions about learners and learning underlying data modeling techniques used to abstract meaning from the data. This paper argues that, for learning analytics to take its place as a fully-fledged member of the learning sciences, it needs seriously to consider how to measure learning. Methodology crafted at the interface of measurement science and learning analytics may be of sufficient interest to create a new subfield of scholarship - dubbed here `metrilytics' - to make a distinctive contribution to the science of learning.","PeriodicalId":437369,"journal":{"name":"Proceedings of the 8th International Conference on Learning Analytics and Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129274614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Profiling students from their questions in a blended learning environment
Authors: Fatima Harrak, François Bouchet, Vanda Luengo, P. Gillois
DOI: 10.1145/3170358.3170389
Published: 2018-03-07, Proceedings of the 8th International Conference on Learning Analytics and Knowledge
Abstract: Automatic analysis of learners' questions can be used to improve the quality of those questions and to help teachers address them. We investigated questions (N = 6457) asked before class by first-year medicine/pharmacy students on an online platform used by professors to prepare their on-site Q&A sessions. Our long-term objectives are to help professors categorize those questions and to provide students with feedback on the quality of their questions. To do so, we first manually categorized students' questions, which led to a taxonomy that was then used to annotate the whole corpus automatically. We identified students' characteristics from the typology of questions they asked using the K-Means algorithm over four courses, clustering students by the proportion of questions asked in each dimension of the taxonomy. We then characterized the clusters by attributes not used for clustering, such as students' grades, attendance, and the number and popularity of questions asked. Two similar clusters always appeared: a cluster (A) of students with below-average grades and lower attendance, who asked few questions, though popular ones; and a cluster (D) of students with higher grades and high attendance, who asked more questions that were less popular. This work demonstrates the validity and usefulness of our taxonomy and shows the relevance of this classification for identifying different student profiles.

Title: Evaluating retrieval practice in a MOOC: how writing and reading summaries of videos affects student learning
Authors: T. Zee, Dan Davis, Nadira Saab, B. Giesbers, Jasper Ginn, F. V. D. Sluis, F. Paas, W. Admiraal
DOI: 10.1145/3170358.3170382
Published: 2018-03-07, Proceedings of the 8th International Conference on Learning Analytics and Knowledge
Abstract: Videos are often the core content in open online education, such as in Massive Open Online Courses (MOOCs), and students spend most of their time in a MOOC watching educational videos. However, merely watching a video is a relatively passive learning activity. To increase the educational benefits of online videos, students could benefit from interacting more actively with the to-be-learned material. This paper presents two studies (n = 13k) that examined the educational benefits of two more active learning strategies: 1) Retrieval Practice tasks, which asked students to briefly summarize the content of videos, and 2) Given Summary tasks, in which students were asked to read pre-written summaries of videos. Both writing and reading summaries of videos were positively related to quiz grades. Both interventions seemed to help students perform better, but there was no apparent difference between their efficacy. These studies show how the quality of online education can be improved by adapting course design to established approaches from the learning sciences.
