{"title":"Introducing meaning to clicks: Towards traced-measures of self-efficacy and cognitive load","authors":"J. Jovanović, D. Gašević, A. Pardo, S. Dawson, A. Whitelock-Wainwright","doi":"10.1145/3303772.3303782","DOIUrl":"https://doi.org/10.1145/3303772.3303782","url":null,"abstract":"The use of learning trace data together with various analytical methods has proven successful in detecting patterns in learning behaviour, identifying student profiles, and clustering learning resources. However, interpretation of the findings is often difficult and uncertain due to a lack of contextual data (e.g., data on student motivation, emotion or curriculum design). In this study we explored the integration of student self-reports about cognitive load and self-efficacy into the learning process and the collection of the relevant student perceptions as learning traces. Our objective was to examine the association of traced measures of relevant learning constructs (cognitive load and self-efficacy) with i) indicators of the students' learning behaviour derived from trace data, and ii) the students' academic performance. The results indicated associations between some indicators of students' engagement with learning activities and the traced measures of cognitive load and self-efficacy.
Correlational analysis demonstrated a significant positive correlation between the students' course performance and the traced measures of cognitive load and self-efficacy.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124654586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Top Concept Networks of Professional Education Reflections","authors":"A. Wise, Yi Cui","doi":"10.1145/3303772.3303840","DOIUrl":"https://doi.org/10.1145/3303772.3303840","url":null,"abstract":"This study explores the application of computational techniques to extract information about dental students' developing conceptions of their profession from digital reflective journal entries. Top concept networks were created for two cohorts of students at the beginning and end of their four-year program. A shift from a collection of general notions about becoming a professional to a more integrated, patient-centered conceptualization was found for both cohorts. The two groups initially differed in their perception of dental school (a mechanism for being able to work as a dentist versus a place to learn the skills to serve patients well) and subsequently in the extent of attention they paid to the feelings of their patients and themselves, as well as the continual growth of skill after graduation. Several useful linguistic markers were identified for examining these same issues in other cohorts. The results suggest that top concept networks can offer a useful window into students' developing conceptions of their profession. 
This kind of information can support student success on a macro level by offering feedback on existing curricula and informing learning designs to cultivate desired conceptions, and on a micro level by identifying particular ways individuals align with and diverge from the common trajectories.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114211707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Timeliness Deviation: A novel Approach to Evaluate Educational Recommender Systems for Closed-Courses","authors":"Christopher Krauss, A. Merceron, S. Arbanowski","doi":"10.1145/3303772.3303774","DOIUrl":"https://doi.org/10.1145/3303772.3303774","url":null,"abstract":"The decision on what item to learn next in a course can be supported by a recommender system (RS), which aims at making the learning process more efficient and effective. However, learners and learning activities frequently change over time. The question is: how can timely, appropriate recommendations of learning resources actually be evaluated and compared? Researchers have found that, in addition to a standardized dataset definition, there is also a lack of standardized definitions of evaluation procedures for RS in the area of Technology Enhanced Learning. This paper argues that, in a closed-course setting, a time-dependent split into the training set and test set is more appropriate than the usual cross-validation to evaluate the Top-N recommended learning resources at various points in time. Moreover, a new measure is introduced to determine the timeliness deviation between the point in time of an item recommendation and the point in time of the actual access by the user.
Different recommender algorithms, including two novel ones, are evaluated with the time-dependent evaluation framework and the results, as well as the appropriateness of the framework, are discussed.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126336526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Refusing to Try: Characterizing Early Stopout on Student Assignments","authors":"Anthony F. Botelho, A. Varatharaj, E. V. Inwegen, N. Heffernan","doi":"10.1145/3303772.3303806","DOIUrl":"https://doi.org/10.1145/3303772.3303806","url":null,"abstract":"A prominent issue faced by the education research community is that of student attrition. While large research efforts have been devoted to studying course-level attrition, widely referred to as dropout, less research has been focused on finer-grained assignment-level attrition commonly observed in K-12 classrooms. This latter instantiation of attrition, referred to in this paper as \"stopout,\" is characterized by students failing to complete their assigned work, but the causes of such behavior are often not known. This becomes a large problem for educators and developers of learning platforms, as students who give up on assignments early miss opportunities to learn and practice the material, which may affect future performance on related topics; similarly, it is difficult for researchers to develop, and subsequently difficult for computer-based systems to deploy, interventions aimed at promoting productive persistence once a student has ceased interaction with the software. This difficulty highlights the importance of understanding and identifying early signs of stopout behavior in order to aid students preemptively and promote productive persistence in their learning. While many cases of student stopout may be attributable to gaps in student knowledge and indicative of struggle, student attributes such as grit and persistence may be further affected by other factors. This work focuses on identifying different forms of stopout behavior in the context of middle school math by observing student behaviors at the sub-problem level.
We find that students exhibit disproportionate stopout on the first problem of their assignments in comparison to stopout on subsequent problems, identifying a behavior that we call \"refusal,\" and use the emerging patterns of student activity to better understand the potential causes underlying stopout behavior early in an assignment.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134448607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Ranking and Rating Scales in Online Peer Assessment: Simulation Approach","authors":"Dmytro Babik, S. Stevens, Andrew E. Waters","doi":"10.1145/3303772.3303820","DOIUrl":"https://doi.org/10.1145/3303772.3303820","url":null,"abstract":"This study examines the fidelity of ranking and rating scales in the context of online peer review and assessment. Using the Monte-Carlo simulation technique, we demonstrated that rating scales outperform ranking scales in revealing the relative \"true\" latent quality of the peer-assessed artifacts via the observed aggregate peer assessment scores. Our analysis focused on a simple, single-round peer assessment process and took into account peer assessment network topology, network size, the number of assessments per artifact, and the correlation statistics used. This methodology allows us to separate the effects of structural components of peer assessment from cognitive effects.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132793903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Square it up!: How to model step duration when predicting student performance","authors":"Irene-Angelica Chounta, Paulo F. Carvalho","doi":"10.1145/3303772.3303827","DOIUrl":"https://doi.org/10.1145/3303772.3303827","url":null,"abstract":"In this paper, we explore how we can model students' response times to predict student performance in Intelligent Tutoring Systems. Related research suggests that response time can provide information with respect to correctness. However, time is not consistently used when modeling students' performance. Here, we build on previous work that indicated that the relationship between response time and student performance is non-linear. Based on this concept, we compare three models: a standard Additive Factors Analysis Model (AFM), an AFM model enhanced with a linear step duration parameter, and an AFM model enhanced with a quadratic step duration parameter. The results of this comparison show that the AFM model enhanced with the quadratic step duration parameter outperforms the other models over four different datasets and for most of the metrics we used to evaluate the models in cross-validation and prediction.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130625107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UPM","authors":"Asmaa Elbadrawy, G. Karypis","doi":"10.1145/3303772.3303799","DOIUrl":"https://doi.org/10.1145/3303772.3303799","url":null,"abstract":"Identifying enrollment patterns associated with course success can help educators design better degree plans, and students make informed decisions about future enrollments. While discriminating pattern mining techniques can be used to address this problem, course enrollment patterns include sequence and quantity (grades) information. None of the existing methods were designed to account for both factors. In this work we present UPM, a Universal discriminating Pattern Mining framework that simultaneously mines various types of enrollment patterns while accounting for sequence and quantity using an expansion-specific approach. Unlike the existing methods, UPM expands a given pattern with an item by finding a minimum-entropy split over the item's quantities. We then use UPM to extract discriminating enrollment patterns from the high and the low performing student groups. These patterns can be utilized by educators for degree planning. To evaluate the quality of the extracted patterns, we adopt a supervised classification approach where we apply various classification techniques to label students according to their performance based on the extracted patterns. Our evaluation shows that the classification accuracies obtained using the UPM extracted patterns are higher than the accuracies obtained using patterns extracted by other techniques. Accuracy improves significantly for students with larger numbers of patterns.
Moreover, expansion-specific quantitative mining leads to more accurate classifications than the methods that do not account for quantities (grades).","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121199537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comprehension Factor Analysis: Modeling student's reading behaviour: Accounting for reading practice in predicting students' learning in MOOCs","authors":"Khushboo Thaker, Paulo F. Carvalho, K. Koedinger","doi":"10.1145/3303772.3303817","DOIUrl":"https://doi.org/10.1145/3303772.3303817","url":null,"abstract":"Massive Open Online Courses (MOOCs) often provide lecture-based learning along with lecture notes, textbooks, and videos to students. Moreover, MOOCs also incorporate practice activities and quizzes. Student learning in MOOCs can be tracked and improved using state-of-the-art student modeling. Currently, this means employing conventional student models that are constructed around Intelligent Tutoring Systems (ITS). Traditional ITS systems only utilize students' performance interactions (quiz, problem-solving or practice activities). Therefore, text interactions are entirely ignored while modeling students' performance in MOOCs using these cognitive models. In this work, we propose a Comprehension Factor Analysis model (CFM) for online courses, which integrates student reading interactions in student models to track and predict learning outcomes. Our model evaluation shows that CFM outperforms state-of-the-art models in predicting students' performance in a MOOC.
These models can support better student-level adaptation in the context of MOOCs.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124900987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contextualizable Learning Analytics Design: A Generic Model and Writing Analytics Evaluations","authors":"A. Shibani, Simon Knight, S. B. Shum","doi":"10.1145/3303772.3303785","DOIUrl":"https://doi.org/10.1145/3303772.3303785","url":null,"abstract":"A major promise of learning analytics is that through the collection of large amounts of data we can derive insights from authentic learning environments, and impact many learners at scale. However, the context in which the learning occurs is important for educational innovations to impact student learning. In particular, for student-facing learning analytics systems like feedback tools to work effectively, they have to be integrated with pedagogical approaches and the learning design. This paper proposes a conceptual model to strike a balance between generalizable, scalable support and contextualized, specific support by clarifying key elements that help to contextualize student-facing learning analytics tools. We demonstrate an implementation of the model using a writing analytics example, where the features, feedback and learning activities around the automated writing feedback tool are tuned for the pedagogical context and the assessment regime at hand, by co-designing them with the subject experts.
The model can help learning analytics move from generalized support to meaningful, contextualized support for enhancing learning.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121749573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Platform Analytics: A step towards Personalization and Adaptation in Education","authors":"Katerina Mangaroska, B. Vesin, M. Giannakos","doi":"10.1145/3303772.3303825","DOIUrl":"https://doi.org/10.1145/3303772.3303825","url":null,"abstract":"Learning analytics are used to track learners' progress and empower educators and learners to make well-informed data-driven decisions. However, due to the distributed nature of the learning process, analytics need to be combined to offer broader insights into learners' behavior and experiences. Consequently, this paper presents an architecture of a learning ecosystem that integrates and utilizes cross-platform analytics. The proposed cross-platform architecture has been put into practice via a Java programming course. After a series of studies, a proof of concept was derived that shows how cross-platform analytics amplify the relevant analytics for the learning process. Such analytics could improve educators' and learners' understanding of their own actions and the environments in which learning occurs.","PeriodicalId":382957,"journal":{"name":"Proceedings of the 9th International Conference on Learning Analytics & Knowledge","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132833944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}