Detecting Item Parameter Drift in Small Sample Rasch Equating
Authors: Daniel Jurich, Chunyan Liu
Journal: Applied Measurement in Education (Q3, Education & Educational Research; Impact Factor 1.1)
DOI: 10.1080/08957347.2023.2274567 (https://doi.org/10.1080/08957347.2023.2274567)
Published: 2023-11-08 (Journal Article)
Citations: 0
Abstract
Screening items for parameter drift helps protect against serious validity threats and ensure score comparability when equating forms. Although many high-stakes credentialing examinations operate with small sample sizes, few studies have investigated methods to detect drift in small sample equating. This study demonstrates that several newly researched drift detection strategies can improve equating accuracy under certain conditions with small samples where some anchor items display item parameter drift. Results showed that the recently proposed methods mINFIT and mOUTFIT, as well as the more conventional Robust-z, helped mitigate the adverse effects of drifting anchor items in conditions with higher drift levels or with more than 75 examinees. In contrast, the Logit Difference approach excessively removed invariant anchor items. The discussion provides recommendations on how practitioners working with small samples can use the results to make more informed decisions regarding item parameter drift.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Supplementary material
Supplemental data for this article can be accessed online at https://doi.org/10.1080/08957347.2023.2274567

Notes
1. In certain testing designs, some items may be reused as non-anchor items on future forms. Although IPD can occur on those items, we use the traditional IPD definition as specific to differential functioning in the items reused to serve as the equating anchor set.
2. In IRT, the old form anchor item parameter estimates can also come from a pre-calibrated bank. However, we use the old and new form terminology because the simulation design involves directly equating to a previous form.
3. For example, if an item in the 1.0-magnitude condition drifted from b = 0 to b = 1 between Forms 1 and 2, it would be treated as having a true b of 1.0 if selected for Form 3.
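The abstract names Robust-z as a conventional screen for drifting anchor items. The article itself does not reproduce the formula here, so the following is a minimal sketch of the commonly cited formulation: center each anchor item's old-to-new difficulty (b) difference on the median and scale by 0.74 × IQR (a robust stand-in for the standard deviation under normality), then flag items whose |z| exceeds a cutoff. The cutoff value and the example data are illustrative assumptions, not values from the study.

```python
import numpy as np

def robust_z(b_old, b_new, cutoff=1.645):
    """Flag anchor items whose difficulty shift is an outlier.

    Robust-z replaces the mean/SD with the median and a scaled IQR
    (0.74 * IQR approximates the SD under normality), so one or two
    drifting items do not inflate the spread estimate.
    The 1.645 cutoff is an illustrative assumption.
    """
    d = np.asarray(b_new) - np.asarray(b_old)            # logit differences
    iqr = np.percentile(d, 75) - np.percentile(d, 25)    # robust spread
    z = (d - np.median(d)) / (0.74 * iqr)
    return np.abs(z) > cutoff                            # True = flag for removal

# Hypothetical example: five stable anchors and one item drifting ~1 logit
b_old = [-1.20, -0.50, 0.00, 0.40, 1.10, 0.20]
b_new = [-1.15, -0.53, 0.10, 0.32, 1.12, 1.20]
print(robust_z(b_old, b_new))  # only the last item is flagged
```

Because the median and IQR are barely moved by a single aberrant item, the screen stays sensitive in small anchor sets, which is one reason the paper contrasts it with the Logit Difference approach (a fixed threshold on |b_new - b_old| that, per the abstract, tended to over-remove invariant anchors).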
Journal introduction:
Because interaction between the domains of research and application is critical to the evaluation and improvement of new educational measurement practices, Applied Measurement in Education's prime objective is to improve communication between academicians and practitioners. To help bridge the gap between theory and practice, articles in this journal describe original research studies, innovative strategies for solving educational measurement problems, and integrative reviews of current approaches to contemporary measurement issues. Peer review policy: all review papers in this journal have undergone editorial screening and peer review.