Fairness, Trust, Transparency, Equity, and Responsibility in Learning Analytics
M. Khalil, P. Prinsloo, Sharon Slade
J. Learn. Anal. Pub Date: 2023-03-12. DOI: 10.18608/jla.2023.7983
Abstract: Learning analytics can benefit a wide range of stakeholders across many educational contexts. It can provide prompt support to students, facilitate effective teaching, highlight aspects of course content that might be adapted, and predict a range of possible outcomes, such as students registering for more appropriate courses, supporting students’ self-efficacy, or redesigning a course’s pedagogical strategy. It does these things based on the assumptions and rules that learning analytics developers set out. As such, learning analytics can exacerbate existing inequalities, such as unequal access to support or opportunities based on (any combination of) race, gender, culture, age, socioeconomic status, etc., or work to overcome the impact of such inequalities on realizing student potential. In this editorial, we introduce several selected articles that explore the principles of fairness, equity, and responsibility in the context of learning analytics. We discuss existing research and summarize the papers within this special section to outline what is known and what remains to be explored. This editorial concludes by celebrating the breadth of work set out here, but also by suggesting that there are no simple answers to ensuring fairness, trust, transparency, equity, and responsibility in learning analytics. More needs to be done to ensure that our mutual understanding of responsible learning analytics remains embedded in learning analytics research and design practice.
New Vistas on Responsible Learning Analytics: A Data Feminist Perspective
T. Cerratto Pargman, C. McGrath, Olga Viberg, Simon Knight
J. Learn. Anal. Pub Date: 2023-03-10. DOI: 10.18608/jla.2023.7781
Abstract: The focus of ethics in learning analytics (LA) frameworks and guidelines is predominantly on procedural elements of data management and accountability. Another, less represented focus is on the duty to act and on LA as a moral practice. Data feminism, as a critical theoretical approach to data science practices, may offer LA researchers and practitioners a valuable lens through which to consider LA as a moral practice. This paper examines what data feminism can offer the LA community. It identifies critical questions for further developing and enabling a responsible stance in LA research and practice, taking one particular case — algorithmic decision-making — as a point of departure.
Transparency and Trustworthiness in User Intentions to Follow Career Recommendations from a Learning Analytics Tool
Egle Gedrimiene, I. Celik, K. Mäkitalo, H. Muukkonen
J. Learn. Anal. Pub Date: 2023-03-09. DOI: 10.18608/jla.2021.7791
Abstract: Transparency and trustworthiness are among the key requirements for the ethical use of learning analytics (LA) and artificial intelligence (AI) in the context of social inclusion and equity. However, research on these issues as they pertain to users is lacking, leaving it unclear how transparent and trustworthy current LA tools are for their users and how perceptions of these qualities relate to user behaviour. In this study, we investigate user experiences of an LA tool in the context of career guidance, which plays a crucial role in supporting nonlinear career pathways for individuals. We review the ethical challenges of big data, AI, and LA in connection with career guidance and analyze the user experiences (N = 106) of an LA career guidance tool that recommends study programs and institutions to users. Results indicate that the LA career guidance tool was evaluated as trustworthy but not transparent. Accuracy was a stronger predictor of the intention to follow the tool’s recommendations than was understanding the origins of the recommendations. Users’ age emerged as an important factor in their assessments of transparency. We discuss the implications of these findings and suggest emphasizing accuracy in the development of LA tools for career guidance.
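As a rough illustration of the analysis this abstract implies (perceived accuracy and perceived transparency as competing predictors of the intention to follow a recommendation, with age as a covariate), the sketch below fits a simple regression on made-up data. The variable names, scales, and model form are assumptions for illustration, not the authors' actual analysis.

```python
# Illustrative sketch only (not the authors' analysis): regress intention to
# follow a recommendation on perceived accuracy, perceived transparency
# (understanding of where the recommendation came from), and age.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses on 5-point scales, plus respondent age.
responses = pd.DataFrame({
    "intention":    [4, 2, 5, 3, 4, 1, 5, 3],
    "accuracy":     [4, 2, 5, 3, 4, 2, 5, 3],
    "transparency": [2, 3, 3, 2, 4, 1, 3, 2],
    "age":          [19, 34, 23, 45, 28, 52, 21, 30],
})

# OLS with both perception measures; a larger coefficient on `accuracy` than
# on `transparency` would mirror the pattern reported in the abstract.
model = smf.ols("intention ~ accuracy + transparency + age", data=responses).fit()
print(model.params)
```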
The Promise of MOOCs Revisited? Demographics of Learners Preparing for University
M. Meaney, T. Fikes
J. Learn. Anal. Pub Date: 2023-03-08. DOI: 10.18608/jla.2023.7807
Abstract: This paper leverages cluster analysis to provide insight into how traditionally underrepresented learners engage with entry-level massive open online courses (MOOCs) intended to lower the barrier to university enrolment, produced by a major research university in the United States. From an initial sample of 260,239 learners, we cluster-analyze a subset of data from 29,083 participants who submitted an assignment in one of nine entry-level MOOC courses. Manhattan distance and Gower distance measures are computed based on engagement, achievement, and demographic data. To our knowledge, this marks one of the first uses of Gower distance to cluster mixed-variable data to explore fairness and equity in the MOOC literature. The clusters are derived using the CLARA and PAM algorithms and enriched with demographic data, with a particular focus on education level, as well as approximated socioeconomic status (SES) for a smaller subset of learners. Results indicate that learners without a college degree are more likely to be high-performing than college-educated learners, and learners from lower SES backgrounds are just as likely to be successful as learners from middle and higher SES backgrounds. While MOOCs have struggled to improve access to learning, fairer and more equitable outcomes for traditionally underrepresented learners are possible.
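The clustering approach the abstract names (a Gower distance matrix over mixed numeric and categorical learner variables, partitioned with a PAM-style k-medoids algorithm) can be sketched as follows. This is a minimal illustration on invented data, assuming the third-party `gower` and `scikit-learn-extra` packages; the column names and cluster count are not taken from the paper.

```python
# Illustrative sketch (not the authors' pipeline): cluster mixed-type MOOC
# learner data with Gower distance and a PAM-style k-medoids partition.
import pandas as pd
import gower                                 # pip install gower
from sklearn_extra.cluster import KMedoids   # pip install scikit-learn-extra

# Hypothetical mixed-variable learner records: numeric engagement/achievement
# measures plus categorical demographics.
learners = pd.DataFrame({
    "videos_watched":   [42, 3, 17, 88, 5],
    "assignment_score": [0.91, 0.40, 0.75, 0.98, 0.55],
    "education_level":  ["no_degree", "bachelor", "no_degree", "master", "bachelor"],
    "ses_band":         ["low", "mid", "low", "high", "mid"],
})

# Gower distance handles numeric and categorical columns in a single matrix.
dist = gower.gower_matrix(learners)

# PAM (k-medoids) on the precomputed distance matrix; k is chosen arbitrarily here.
pam = KMedoids(n_clusters=2, metric="precomputed", method="pam", random_state=0)
learners["cluster"] = pam.fit_predict(dist)

# Enrich clusters with outcome data to compare performance across groups.
print(learners.groupby("cluster")["assignment_score"].mean())
```

Gower distance is a natural choice here because it mixes per-variable comparisons (range-normalized differences for numeric columns, simple matching for categorical ones) into one dissimilarity, which medoid-based methods like PAM and CLARA can consume directly.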
The Effects of Explanations in Automated Essay Scoring Systems on Student Trust and Motivation
Rianne Conijn, Patricia K. Kahr, Chris J. Snijders
J. Learn. Anal. Pub Date: 2023-03-07. DOI: 10.18608/jla.2023.7801
Abstract: Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been proposed as a way to provide more insight into the inner workings of AI algorithms. However, carefully designed user studies on how to design explanations for AI in education are still limited. The current study aimed to identify the effect of explanations of an automated essay scoring system on students’ trust and motivation. The explanations were designed using a needs-elicitation study with students in combination with guidelines and frameworks from explainable AI. Two types of explanations were tested: full-text global explanations and an accuracy statement. The results showed that neither type of explanation had an effect on student trust or motivation compared to no explanation. Interestingly, the grade provided by the system, and especially the difference between the student’s self-estimated grade and the system grade, showed a large influence. Hence, it is important to consider the effects of the outcome of the system (here, the grade) when considering the effect of explanations of AI in education.
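To make the reported pattern concrete, the sketch below runs the kind of tests such a design suggests: a one-way comparison of trust across explanation conditions and a correlation between trust and the gap between self-estimated and system grades. The data, condition labels, and choice of tests are assumptions for illustration, not the authors' analysis.

```python
# Illustrative sketch (assumed data, not the authors' analysis): test whether
# explanation type affects trust, and whether the self-estimate/system grade
# gap relates to trust.
import numpy as np
from scipy import stats

# Hypothetical trust ratings per explanation condition (5-point scale).
no_explanation = np.array([3.2, 3.6, 2.9, 3.4, 3.1])
global_text    = np.array([3.3, 3.5, 3.0, 3.2, 3.4])
accuracy_stmt  = np.array([3.1, 3.4, 3.2, 3.3, 3.0])

# One-way ANOVA across conditions; a non-significant result would mirror the
# reported absence of an explanation effect.
f_stat, p_cond = stats.f_oneway(no_explanation, global_text, accuracy_stmt)

# Correlation between the grade gap (system grade minus self-estimate) and
# trust, pooled across conditions.
grade_gap = np.array([-1.0, 0.5, -0.5, 1.5, -2.0, 0.0, 0.5, -1.5, 1.0, -0.5,
                      2.0, 0.0, -1.0, 0.5, 1.0])
trust = np.concatenate([no_explanation, global_text, accuracy_stmt])
r, p_gap = stats.pearsonr(grade_gap, trust)

print(f"condition effect: F={f_stat:.2f}, p={p_cond:.3f}")
print(f"grade-gap correlation: r={r:.2f}, p={p_gap:.3f}")
```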
Applying a Responsible Innovation Framework in Developing an Equitable Early Alert System: A Case Study
C. Patterson, Emily York, D. Maxham, Rudy Molina, Paul Mabrey
J. Learn. Anal. Pub Date: 2023-03-05. DOI: 10.18608/jla.2023.7795
Abstract: The anticipation, inclusion, responsiveness, and reflexivity (AIRR) framework (Stilgoe et al., 2013) has helped those in science and technology fields shift their focus from products to the processes used to create those products. However, the framework is not known to have been applied to the development and implementation of data analytics in higher education. In a case study of creating an early-alert retention system at James Madison University, a working group of approximately 20 faculty, staff, and students creatively utilized the AIRR framework. The present study discusses how the AIRR framework was used to observe and enhance group processes, and the outcomes of those enhanced processes.
Learning Analytics and the Abolitionist Imagination
Shea Swauger, R. Kalir
J. Learn. Anal. Pub Date: 2023-03-05. DOI: 10.18608/jla.2023.7813
Abstract: This article advances an abolitionist reframing of learning analytics (LA) that explores the benefits of productive disorientation, considers potential harms and care made possible by LA, and suggests the abolitionist imagination as an important educational practice. By applying abolitionist concepts to LA, we propose it may be feasible to open new critiques and social futures that build toward equity-oriented LA design and implementation. We introduce speculative methods to advance three vignettes imagining how LA could be weaponized against students or transformed into a justice-directed learning tool. Our speculative methods aim to destabilize where power in LA has been routinely located and contested, thereby opening new lines of inquiry about more equitable educational prospects. Our concluding discussion addresses how speculative design and fiction are complementary methods to the abolitionist imagination and can be pragmatic tools to help build a world with fairer, more equitable, and responsible LA technologies.
Serious Game Analytics by Design: Feature Generation and Selection Using Game Telemetry and Game Metrics: Toward Predictive Model Construction
Wenyi Lu, Joe Griffin, T. Sadler, J. Laffey, S. Goggins
J. Learn. Anal. Pub Date: 2023-03-05. DOI: 10.18608/jla.2023.7681
Abstract: The construction of prediction models reflecting players’ learning performance in serious games currently faces various challenges for learning analytics. In this study, we design, implement, and field test a learning analytics system for a serious game, advancing the field by explicitly showing which in-game features correspond to differences in learner performance. We then deploy and test a system that provides instructors with clear signals regarding student learning and progress in the game, on which instructors could base interventions. Within the study, we examined, coded, and filtered a substantial gameplay corpus to determine expertise in the game. Mission HydroSci (MHS) is a serious game that teaches middle-school students water science. Using our logging system, designed and implemented alongside the game’s design and development, we captured around 60 in-game features from the gameplay of 373 students who completed Unit 3 of MHS in its first field test. We tested eight hypotheses during the field test and presented this paper’s results to participating teachers. Our findings reveal several statistically significant features that will be critical for creating a validated prediction model. We discuss how this work can help future research establish a framework for designing analytics systems for serious games and advance game design and analytics theory.
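The pipeline the abstract outlines (aggregating raw telemetry events into per-player features, then screening those features for association with learning performance before building a prediction model) might look roughly like the sketch below. The event names, feature definitions, labels, and screening test are invented for illustration and are not the MHS logging system.

```python
# Illustrative sketch (assumed log schema, not the MHS logging system):
# aggregate raw telemetry events into per-player features and screen them
# against a performance label before any predictive modelling.
import pandas as pd
from scipy import stats

# Hypothetical telemetry: one row per logged game event.
events = pd.DataFrame({
    "player_id": [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "event": ["dialogue", "measurement", "measurement", "retry",
              "dialogue", "retry", "retry",
              "dialogue", "dialogue", "measurement", "measurement",
              "dialogue", "measurement", "retry", "retry"],
    "duration_s": [30, 120, 60, 15, 25, 40, 35, 20, 10, 90, 110, 45, 30, 50, 20],
})

# Feature generation: event counts and total time per event type, per player.
features = (events.groupby(["player_id", "event"])["duration_s"]
            .agg(["count", "sum"])
            .unstack("event", fill_value=0))
features.columns = [f"{agg}_{evt}" for agg, evt in features.columns]

# Hypothetical unit-level performance label (e.g., post-test pass/fail).
labels = pd.Series({1: 1, 2: 0, 3: 1, 4: 0}, name="passed")

# Feature screening: test each candidate feature for a difference between
# passing and failing players (Welch's t-test used here as a stand-in).
for col in features.columns:
    passed = features.loc[labels == 1, col]
    failed = features.loc[labels == 0, col]
    t, p = stats.ttest_ind(passed, failed, equal_var=False)
    print(f"{col}: t={t:.2f}, p={p:.3f}")
```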
"It's like a double-edged sword": Mentor Perspectives on Ethics and Responsibility in a Learning Analytics-Supported Virtual Mentoring Program
Hakeoung Hannah Lee, Emma C. Gargroetzi
J. Learn. Anal. Pub Date: 2023-02-28. DOI: 10.18608/jla.2023.7787
Abstract: Data-driven learning analytics (LA) exploits artificial intelligence, data mining, and emerging technologies, rapidly expanding the collection and uses of learner data. Considerations of potential harm and ethical implications have not kept pace, raising concerns about ethical and privacy issues (Holstein & Doroudi, 2019; Prinsloo & Slade, 2018). This empirical study contributes to a growing critical conversation on fairness, equity, and responsibility in LA by lending mentor voices in the context of an online mentorship program through which undergraduate students mentored secondary school students. Specifically, this study responds to a phenomenon shared by four mentors who recounted hiding from mentees that they had seen their LA data. Interviews reveal the convergent and divergent ideas of mentors regarding LA in terms of 1) affordances and constraints, 2) scope and boundaries, 3) ethical tensions and dilemmas, 4) paradoxical demands, and 5) what constitutes fairness, equity, and responsibility. The analysis integrates mentor voices with Slade and Prinsloo’s (2013) principles for an ethical framework for LA, Hacking’s (1982, 1986) dynamic nominalism, and Levinas’s (1989) ethics of responsibility. Design recommendations derived from mentor insights are extended in a discussion of ethical relationality, troubling learners as data-subjects, and considering the possibilities of agency, transparency, and choice in LA system design.
Amplifying Student and Administrator Perspectives on Equity and Bias in Learning Analytics: Alone Together in Higher Education
Rebecca E. Heiser, M. E. D. Stritto, Allen E. Brown, Benjamin Croft
J. Learn. Anal. Pub Date: 2023-02-28. DOI: 10.18608/jla.2023.7775
Abstract: When higher education institutions (HEIs) have the potential to collect large amounts of learner data, it is important to consider the spectrum of stakeholders involved with and impacted by the use of learning analytics. This qualitative research study aims to understand the degree of concern with issues of bias and equity in the uses of learner data as perceived by students, diversity and inclusion leaders, and senior administrative leaders in HEIs. An interview study was designed to investigate the voices of stakeholders who generate, collect, and utilize learning analytics at eight HEIs in the United States. A phased inductive coding analysis revealed similarities and differences among the three stakeholder groups regarding concerns about bias and equity in the uses of learner data. The study findings suggest that stakeholders have varying degrees of data literacy, thus creating conditions of inequality and bias in learning data. By centring the values of these critical stakeholder groups and acknowledging that intersections and hierarchies of power are critical to authentic inclusion, this study provides additional insight into proactive measures that institutions could take to improve equity, transparency, and accountability in their responsible learning analytics efforts.