Efficiency Analysis of Item Response Theory Kernel Equating for Mixed-Format Tests

Impact Factor: 1.0 · CAS Region 4 (Psychology) · JCR Q4 (PSYCHOLOGY, MATHEMATICAL)
Joakim Wallmark, Maria Josefsson, Marie Wiberg
Journal: Applied Psychological Measurement
DOI: 10.1177/01466216231209757 (https://doi.org/10.1177/01466216231209757)
Published: 2023-10-19 (Journal Article)
Citations: 0

Abstract

This study aims to evaluate the performance of Item Response Theory (IRT) kernel equating in the context of mixed-format tests by comparing it to IRT observed score equating and kernel equating with log-linear presmoothing. Comparisons were made through both simulations and real data applications, under both equivalent groups (EG) and non-equivalent groups with anchor test (NEAT) sampling designs. To prevent bias towards IRT methods, data were simulated with and without the use of IRT models. The results suggest that the difference between IRT kernel equating and IRT observed score equating is minimal, both in terms of the equated scores and their standard errors. The application of IRT models for presmoothing yielded smaller standard error of equating than the log-linear presmoothing approach. When test data were generated using IRT models, IRT-based methods proved less biased than log-linear kernel equating. However, when data were simulated without IRT models, log-linear kernel equating showed less bias. Overall, IRT kernel equating shows great promise when equating mixed-format tests.
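The kernel equating framework compared in the abstract works by continuizing each form's discrete score distribution with a Gaussian kernel and then mapping a score on form X through the composed CDFs, e_Y(x) = G⁻¹(F(x)). A minimal sketch of that idea in Python, for an equivalent-groups (EG) design; the bandwidths `hx`, `hy` and any example score distributions are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def continuized_cdf(x, scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution.

    The linear transform with `a` preserves the mean and variance of the
    discrete distribution after smoothing (as in the kernel equating CDF).
    """
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))
    z = (x - a * scores - (1 - a) * mu) / (a * h)
    return np.sum(probs * norm.cdf(z))

def kernel_equate(x, x_scores, x_probs, y_scores, y_probs, hx=0.6, hy=0.6):
    """Equate score x on form X to the scale of form Y: e_Y(x) = G^{-1}(F(x)).

    The inverse CDF is found numerically by root-finding on the
    continuized form-Y CDF, bracketed safely outside the score range.
    """
    p = continuized_cdf(x, x_scores, x_probs, hx)
    lo = y_scores.min() - 10 * hy
    hi = y_scores.max() + 10 * hy
    return brentq(lambda y: continuized_cdf(y, y_scores, y_probs, hy) - p, lo, hi)
```

In practice the score probabilities fed into `continuized_cdf` come either from log-linear presmoothing of the observed frequencies or, in the IRT kernel equating variant the study evaluates, from score distributions implied by a fitted IRT model; the continuization and inversion steps are the same in both cases.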
Source journal metrics:
CiteScore: 2.30
Self-citation rate: 8.30%
Articles per year: 50
Journal description: Applied Psychological Measurement publishes empirical research on the application of techniques of psychological measurement to substantive problems in all areas of psychology and related disciplines.