Fatemeh Salehian Kia, Abelardo Pardo, S. Dawson, Heather O’Brien
{"title":"Exploring the relationship between personalized feedback models, learning design and assessment outcomes","authors":"Fatemeh Salehian Kia, Abelardo Pardo, S. Dawson, Heather O’Brien","doi":"10.1080/02602938.2022.2139351","DOIUrl":null,"url":null,"abstract":"Abstract The increasing use of technology in education has brought new opportunities for the systematic collection of student data. Analyzing technology-mediated trace data, for example, has enabled researchers to bring new insights into student learning processes and the factors involved to support learning and teaching. However, many of these learning analytic studies have drawn conclusions from limited data sets that are derived from a single course or program of study. This impacts the generalizability of noted outcomes and calls for research on larger institutional data sets. The institutional adoption and analysis of learning technology can provide deeper insights into a wide range of learning contexts in practice. This study focused on examining how instructors used the learning tool, OnTask, to provide personalized feedback for students in large classes. We collected usage data from 99 courses and 19,385 students to examine how the instructors customized feedback to different groups of students. The findings reveal that there is a significant association between the topics of feedback and students with different performance. The results also demonstrated that instructors most frequently provided feedback related to student assessment. 
The study emphasizes the importance of teacher and student feedback literacy for creating effective feedback loops.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"860 - 873"},"PeriodicalIF":4.1000,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessment & Evaluation in Higher Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/02602938.2022.2139351","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 2
Abstract
The increasing use of technology in education has created new opportunities for the systematic collection of student data. Analyzing technology-mediated trace data, for example, has enabled researchers to gain new insights into student learning processes and the factors that support learning and teaching. However, many of these learning analytics studies have drawn conclusions from limited data sets derived from a single course or program of study. This limits the generalizability of the reported outcomes and calls for research on larger institutional data sets. The institutional adoption and analysis of learning technology can provide deeper insights into a wide range of learning contexts in practice. This study examined how instructors used the learning tool OnTask to provide personalized feedback to students in large classes. We collected usage data from 99 courses and 19,385 students to examine how instructors customized feedback for different groups of students. The findings reveal a significant association between the topics of feedback and students at different performance levels. The results also showed that instructors most frequently provided feedback related to student assessment. The study emphasizes the importance of teacher and student feedback literacy for creating effective feedback loops.