{"title":"Student perspectives on data provision and use: starting to unpack disciplinary differences","authors":"J. McPherson, H. L. Tong, S. Fatt, Danny Y. T. Liu","doi":"10.1145/2883851.2883945","DOIUrl":"https://doi.org/10.1145/2883851.2883945","url":null,"abstract":"How can we best align learning analytics practices with disciplinary knowledge practices in order to support student learning? Although learning analytics itself is an interdisciplinary field, it tends to take a 'one-size-fits-all' approach to the collection, measurement, and reporting of data, overlooking disciplinary knowledge practices. In line with a recent trend in higher education research, this paper considers the contribution of a realist sociology of education to the field of learning analytics, drawing on findings from recent student focus groups at an Australian university. It examines what learners say about their data needs with reference to organizing principles underlying knowledge practices within their disciplines. The key contribution of this paper is a framework that could be used as the basis for aligning the provision and/or use of data in relation to curriculum, pedagogy, and assessment with disciplinary knowledge practices. The framework extends recent research in Legitimation Code Theory, which understands disciplinary differences in terms of the principles that underpin knowledge-building. The preliminary analysis presented here both provides a tool for ensuring a fit between learning analytics practices and disciplinary practices and standards for achievement, and signals disciplinarity as an important consideration in learning analytics practices.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114445797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving efficacy attribution in a self-directed learning environment using prior knowledge individualization","authors":"Z. Pardos, Yanbo Xu","doi":"10.1145/2883851.2883949","DOIUrl":"https://doi.org/10.1145/2883851.2883949","url":null,"abstract":"Models of learning in EDM and LAK are pushing the boundaries of what can be measured from large quantities of historical data. When controlled randomization is present in the learning platform, such as randomized ordering of problems within a problem set, natural quasi-randomized controlled studies can be conducted, post-hoc. Difficulty and learning gain attribution are among factors of interest that can be studied with secondary analyses under these conditions. However, much of the content that we might like to evaluate for learning value is not administered as a random stimulus to students but instead is being self-selected, such as a student choosing to seek help in the discussion forums, wiki pages, or other pedagogically relevant material in online courseware. Help seekers, by virtue of their motivation to seek help, tend to be the ones who have the least knowledge. When presented with a cohort of students with a bi-modal or uniform knowledge distribution, this can present problems with model interpretability when a single point estimation is used to represent cohort prior knowledge. Since resource access is indicative of a low knowledge student, a model can tend towards attributing the resources with low or negative learning gain in order to better explain performance given the higher average prior point estimate. In this paper we present several individualized prior strategies and demonstrate how learning efficacy attribution validity and prediction accuracy improve as a result. Level of education attained, relative past assessment performance, and the prior per student cold start heuristic were employed and compared as prior knowledge individualization strategies.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115077811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of achievement goal orientations when studying effect of learning analytics visualizations","authors":"Sanam Shirazi Beheshitha, M. Hatala, D. Gašević, Srécko Joksimovíc","doi":"10.1145/2883851.2883904","DOIUrl":"https://doi.org/10.1145/2883851.2883904","url":null,"abstract":"When designing learning analytics tools for use by learners we have an opportunity to provide tools that consider a particular learner's situation and the learner herself. To afford actual impact on learning, such tools have to be informed by theories of education. Particularly, educational research shows that individual differences play a significant role in explaining students' learning process. However, limited empirical research in learning analytics has investigated the role of theoretical constructs, such as motivational factors, that are underlying the observed differences between individuals. In this work, we conducted a field experiment to examine the effect of three designed learning analytics visualizations on students' participation in online discussions in authentic course settings. Using hierarchical linear mixed models, our results revealed that effects of visualizations on the quantity and quality of messages posted by students with differences in achievement goal orientations could either be positive or negative. Our findings highlight the methodological importance of considering individual differences and pose important implications for future design and research of learning analytics visualizations.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122579720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying classification techniques on temporal trace data for shaping student behavior models","authors":"Z. Papamitsiou, E. Karapistoli, A. Economides","doi":"10.1145/2883851.2883926","DOIUrl":"https://doi.org/10.1145/2883851.2883926","url":null,"abstract":"Differences in learners' behavior have a deep impact on their educational performance. Consequently, there is a need to detect and identify these differences and build suitable learner models accordingly. In this paper, we report on the results from an alternative approach for dynamic student behavioral modeling based on the analysis of time-based student-generated trace data. The goal was to unobtrusively classify students according to their time-spent behavior. We applied 5 different supervised learning classification algorithms on these data, using as target values (class labels) the students' performance score classes during a Computer-Based Assessment (CBA) process, and compared the obtained results. The proposed approach has been explored in a study with 259 undergraduate university participant students. The analysis of the findings revealed that a) the low misclassification rates are indicative of the accuracy of the applied method and b) the ensemble learning (treeBagger) method provides better classification results compared to the others. These preliminary results are encouraging, indicating that a time-spent driven description of the students' behavior could have an added value towards dynamically reshaping the respective models.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127218538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards triggering higher-order thinking behaviors in MOOCs","authors":"Xu Wang, Miaomiao Wen, C. Rosé","doi":"10.1145/2883851.2883964","DOIUrl":"https://doi.org/10.1145/2883851.2883964","url":null,"abstract":"With the aim of better scaffolding discussion to improve learning in a MOOC context, this work investigates what kinds of discussion behaviors contribute to learning. We explored whether engaging in higher-order thinking behaviors results in more learning than paying general or focused attention to course materials. In order to evaluate whether to attribute the effect to engagement in the associated behaviors versus persistent characteristics of the students, we adopted two approaches. First, we used propensity score matching to pair students who exhibit a similar level of involvement in other course activities. Second, we explored individual variation in engagement in higher-order thinking behaviors across weeks. The results of both analyses support the attribution of the effect to the behavioral interpretation. A further analysis using LDA applied to course materials suggests that more social oriented topics triggered richer discussion than more biopsychology oriented topics.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115899877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing students' intentionality towards badges within a case study using Khan academy","authors":"José A. Ruipérez Valiente, P. Merino, C. D. Kloos","doi":"10.1145/2883851.2883947","DOIUrl":"https://doi.org/10.1145/2883851.2883947","url":null,"abstract":"One of the most common gamification techniques in education is the use of badges as a reward for making specific student actions. We propose two indicators to gain insight about students' intentionality towards earning badges and use them with data from 291 students interacting with Khan Academy courses. The intentionality to earn badges was greater for repetitive badges, and this can be related to the fact that these are easier to achieve. We provide the general distribution of students depending on these badge indicators, obtaining different profiles of students which can be used for adaptation purposes.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122662524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequencing educational content in classrooms using Bayesian knowledge tracing","authors":"Y. B. David, A. Segal, Y. Gal","doi":"10.1145/2883851.2883885","DOIUrl":"https://doi.org/10.1145/2883851.2883885","url":null,"abstract":"Despite the prevalence of e-learning systems in schools, most of today's systems do not personalize educational data to the individual needs of each student. This paper proposes a new algorithm for sequencing questions to students that is empirically shown to lead to better performance and engagement in real schools when compared to a baseline approach. It is based on using knowledge tracing to model students' skill acquisition over time, and to select questions that advance the student's learning within the range of the student's capabilities, as determined by the model. The algorithm is based on a Bayesian Knowledge Tracing (BKT) model that incorporates partial credit scores, reasoning about multiple attempts to solve problems, and integrating item difficulty. This model is shown to outperform other BKT models that do not reason about (or reason about some but not all) of these features. The model was incorporated into a sequencing algorithm and deployed in two classes in different schools where it was compared to a baseline sequencing algorithm that was designed by pedagogical experts. In both classes, students using the BKT sequencing approach solved more difficult questions and attributed higher performance than did students who used the expert-based approach. Students were also more engaged using the BKT approach, as determined by their interaction time and number of log-ins to the system, as well as their reported opinion. We expect our approach to inform the design of better methods for sequencing and personalizing educational content to students that will meet their individual learning needs.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125323935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wikiglass: a learning analytic tool for visualizing collaborative wikis of secondary school students","authors":"Xiao Hu, Jason Ip, Koossulraj Sadaful, George Lui, S. Chu","doi":"10.1145/2883851.2883966","DOIUrl":"https://doi.org/10.1145/2883851.2883966","url":null,"abstract":"This demo presents Wikiglass, a learning analytic tool for visualizing the statistics and timelines of collaborative Wikis built by secondary school students during their group project in inquiry-based learning. The tool adopts a modular structure for the flexibility of reuse with different data sources. The client side is built with the Model-View-Controller framework and the AngularJS library whereas the server side manages the database and data sources. The tool is currently used by secondary teachers in Hong Kong and is undergoing evaluation and improvement.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125376218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Introduction to data mining for educational researchers","authors":"Christopher A. Brooks, Craig D. S. Thompson, Vitomir Kovanovíc","doi":"10.1145/2883851.2883879","DOIUrl":"https://doi.org/10.1145/2883851.2883879","url":null,"abstract":"The goal of this tutorial is to share data mining tools and techniques used by computer scientists with educational social scientists. We broadly define educational social scientists as being made up of people with backgrounds in the learning sciences, cognitive psychology, and educational research. The learning analytics community is heavily populated with researchers of these backgrounds, and we believe those that find themselves at the intersection of research, theory, and practice have a particular interest in expanding their knowledge of datadriven tools and techniques.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125892777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using A/B testing in MOOC environments","authors":"Jan Renz, Daniel Hoffmann, T. Staubitz, C. Meinel","doi":"10.1145/2883851.2883876","DOIUrl":"https://doi.org/10.1145/2883851.2883876","url":null,"abstract":"In recent years, Massive Open Online Courses (MOOCs) have become a phenomenon offering the possibility to teach thousands of participants simultaneously. In the same time the platforms used to deliver these courses are still in their fledgling stages. While course content and didactics of those massive courses are the primary key factors for the success of courses, still a smart platform may increase or decrease the learners experience and his learning outcome. The paper at hand proposes the usage of an A/B testing framework that is able to be used within an micro-service architecture to validate hypotheses about how learners use the platform and to enable data-driven decisions about new features and settings. To evaluate this framework three new features (Onboarding Tour, Reminder Mails and a Pinboard Digest) have been identified based on a user survey. They have been implemented and introduced on two large MOOC platforms and their influence on the learners behavior have been measured. Finally this paper proposes a data driven decision workflow for the introduction of new features and settings on e-learning platforms.","PeriodicalId":343844,"journal":{"name":"Proceedings of the Sixth International Conference on Learning Analytics & Knowledge","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130669699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}