Exploring the use of Rasch modelling in "common content" items for multi-site and multi-year assessment.

Impact Factor: 3.0 · CAS tier 2 (Education) · JCR Q1, Education & Educational Research
David Hope, David Kluth, Matthew Homer, Avril Dewar, Rikki Goddard-Fuller, Alan Jaap, Helen Cameron
{"title":"Exploring the use of Rasch modelling in \"common content\" items for multi-site and multi-year assessment.","authors":"David Hope, David Kluth, Matthew Homer, Avril Dewar, Rikki Goddard-Fuller, Alan Jaap, Helen Cameron","doi":"10.1007/s10459-024-10354-y","DOIUrl":null,"url":null,"abstract":"<p><p>Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty \"common content\" multiple choice items were offered to all UK medical schools in 2016-17, and a further sixty in 2017-18, with five available in both years. Thirty medical schools participated, for sixty total datasets across two sessions, and 14,342 individual sittings. Schools selected items to embed in written assessment near the end of their programmes. We applied Rasch modelling to evaluate unidimensionality, model fit statistics and item quality, horizontal equating to compare performance across schools, and vertical equating to compare item performance across time. Of the sixty sittings, three provided non-unidimensional data, and eight violated goodness of fit measures. Item-level statistics identified potential improvements in item construction and provided quality assurance. Horizontal equating demonstrated large differences in scores across schools, while vertical equating showed item characteristics were stable across sessions. Rasch modelling provides significant advantages in model- and item- level reporting compared to classical approaches. However, the complexity of the analysis and the smaller number of educators familiar with Rasch must be addressed locally for a programme to benefit. Furthermore, due to the comparative novelty of Rasch modelling, there is greater ambiguity on how to proceed when a Rasch model identifies misfitting or problematic data.</p>","PeriodicalId":50959,"journal":{"name":"Advances in Health Sciences Education","volume":" ","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in Health Sciences Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1007/s10459-024-10354-y","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0

Abstract

Rasch modelling is a powerful tool for evaluating item performance, measuring drift in difficulty over time, and comparing students who sat assessments at different times or at different sites. Here, we use data from thirty UK medical schools to describe the benefits of Rasch modelling in quality assurance and the barriers to using it. Sixty "common content" multiple-choice items were offered to all UK medical schools in 2016-17, and a further sixty in 2017-18, with five available in both years. Thirty medical schools participated, yielding sixty datasets across the two sessions and 14,342 individual sittings. Schools selected items to embed in written assessments near the end of their programmes. We applied Rasch modelling to evaluate unidimensionality, model fit statistics and item quality, horizontal equating to compare performance across schools, and vertical equating to compare item performance across time. Of the sixty sittings, three provided non-unidimensional data, and eight violated goodness-of-fit measures. Item-level statistics identified potential improvements in item construction and provided quality assurance. Horizontal equating demonstrated large differences in scores across schools, while vertical equating showed item characteristics were stable across sessions. Rasch modelling provides significant advantages in model- and item-level reporting compared to classical approaches. However, the complexity of the analysis and the relatively small number of educators familiar with Rasch modelling must be addressed locally for a programme to benefit. Furthermore, due to the comparative novelty of Rasch modelling, there is greater ambiguity about how to proceed when a Rasch model identifies misfitting or problematic data.
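As context for the approach summarised above, the dichotomous Rasch model expresses the probability that person p answers item i correctly in terms of a person ability parameter and an item difficulty parameter on a common logit scale:

\[
P(X_{pi} = 1 \mid \theta_p, \beta_i) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)}
\]

The abstract does not specify the software or equating procedure used, so the following is only a minimal sketch of one common anchor-based approach to the vertical equating it describes: with the five items shared between 2016-17 and 2017-18, the mean difficulty difference on those anchors can serve as an equating constant that places the two sessions' calibrations on one scale. All names and values below are hypothetical.

```python
# Illustrative sketch only: mean-shift anchor equating for two separately
# calibrated sessions. The difficulty values are hypothetical, not the
# paper's data, and the paper may have used a different procedure.
import numpy as np

# Hypothetical Rasch difficulty estimates (logits) for the five anchor items,
# estimated separately within each session's calibration.
anchors_2016 = np.array([-0.42, 0.10, 0.55, -1.20, 0.87])
anchors_2017 = np.array([-0.30, 0.21, 0.60, -1.05, 0.99])

# The equating constant is the mean difficulty difference on the anchors.
shift = np.mean(anchors_2016 - anchors_2017)

# Applying the shift places the 2017-18 calibration on the 2016-17 scale,
# so non-anchor items and person measures become comparable across sessions.
anchors_2017_equated = anchors_2017 + shift
print(f"equating constant: {shift:.3f} logits")
print("anchor displacement after equating:",
      np.round(anchors_2016 - anchors_2017_equated, 3))
```

In practice the stability of each anchor would also be checked (for example, the residual displacement printed above), which is one way a finding such as "item characteristics were stable across sessions" can be examined.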

Source journal
CiteScore: 6.90
Self-citation rate: 12.50%
Articles published: 86
Review time: >12 weeks
Journal description: Advances in Health Sciences Education is a forum for scholarly and state-of-the-art research into all aspects of health sciences education. It publishes empirical studies as well as discussions of theoretical issues and practical implications. The primary focus of the journal is linking theory to practice, so priority is given to papers that have a sound theoretical basis and strong methodology.