{"title":"The Optimal Design of Bifactor Multidimensional Computerized Adaptive Testing with Mixed-format Items.","authors":"Xiuzhen Mao, Jiahui Zhang, Tao Xin","doi":"10.1177/01466216221108382","DOIUrl":"10.1177/01466216221108382","url":null,"abstract":"<p><p>Multidimensional computerized adaptive testing (MCAT) using mixed-format items holds great potential for the next-generation assessments. Two critical factors in the mixed-format test design (i.e., the order and proportion of polytomous items) and item selection were addressed in the context of mixed-format bifactor MCAT. For item selection, this article presents the derivation of the Fisher information matrix of the bifactor graded response model and the application of the bifactor dimension reduction method to simplify the computation of the mutual information (MI) item selection method. In a simulation study, different MCAT designs were compared with varying proportions of polytomous items (0.2-0.6, 1), different item-delivering formats (DPmix: delivering polytomous items at the final stage; RPmix: random delivering), three bifactor patterns (low, middle, and high), and two item selection methods (Bayesian D-optimality and MI). Simulation results suggested that a) the overall estimation precision increased with a higher bifactor pattern; b) the two item selection methods did not show substantial differences in estimation precision; and c) the RPmix format always led to more precise interim and final estimates than the DPmix format. The proportions of 0.3 and 0.4 were recommended for the RPmix and DPmix formats, respectively.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 7","pages":"605-621"},"PeriodicalIF":1.2,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483217/pdf/10.1177_01466216221108382.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33466926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncovering the Complexity of Item Position Effects in a Low-Stakes Testing Context.","authors":"Thai Q Ong, Dena A Pastor","doi":"10.1177/01466216221108134","DOIUrl":"10.1177/01466216221108134","url":null,"abstract":"<p><p>Previous researchers have only either adopted an item or examinee perspective to position effects, where they focused on exploring the relationships among position effects and item or examinee variables separately. Unlike previous researchers, we adopted an integrated perspective to position effects, where we focused on exploring the relationships among position effects, item variables, and examinee variables simultaneously. We evaluated the degree to which position effects on two separate low-stakes tests administered to two different samples were moderated by different item (item length, number of response options, mental taxation, and graphic) and examinee (effort, change in effort, and gender) variables. Items exhibited significant negative linear position effects on both tests, with the magnitude of the position effects varying from item to item. Longer items were more prone to position effects than shorter items; however, the level of mental taxation required to answer the item, the presence of a graphic, and the number of response options were not related to position effects. Examinee effort levels, change in effort patterns, and gender did not moderate the relationships among position effects and item features.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 7","pages":"571-588"},"PeriodicalIF":1.2,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483218/pdf/10.1177_01466216221108134.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33466447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Termination Criteria for Grid Multiclassification Adaptive Testing With Multidimensional Polytomous Items.","authors":"Zhuoran Wang, Chun Wang, David J Weiss","doi":"10.1177/01466216221108383","DOIUrl":"10.1177/01466216221108383","url":null,"abstract":"<p><p>Adaptive classification testing (ACT) is a variation of computerized adaptive testing (CAT) that is developed to efficiently classify examinees into multiple groups based on predetermined cutoffs. In multidimensional multiclassification (i.e., more than two categories exist along each dimension), grid classification is proposed to classify each examinee into one of the grids encircled by cutoffs (lines/surfaces) along different dimensions so as to provide clearer information regarding an examinee's relative standing along each dimension and facilitate subsequent treatment and intervention. In this article, the sequential probability ratio test (SPRT) and confidence interval method were implemented in the grid multiclassification ACT. In addition, two new termination criteria, the grid classification generalized likelihood ratio (GGLR) and simplified grid classification generalized likelihood ratio were proposed for grid multiclassification ACT. Simulation studies, using a simulated item bank, and a real item bank with polytomous multidimensional items, show that grid multiclassification ACT is more efficient than classification based on measurement CAT that focuses on trait estimate precision. In the context of a high-quality bank, GGLR was found to most efficiently terminate the grid multiclassification ACT and classify examinees.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 7","pages":"551-570"},"PeriodicalIF":1.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483219/pdf/10.1177_01466216221108383.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33466449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating the Effect of Differential Rapid Guessing on Population Invariance in Equating.","authors":"Jiayi Deng, Joseph A Rios","doi":"10.1177/01466216221108991","DOIUrl":"10.1177/01466216221108991","url":null,"abstract":"<p><p>Score equating is an essential tool in improving the fairness of test score interpretations when employing multiple test forms. To ensure that the equating functions used to connect scores from one form to another are valid, they must be invariant across different populations of examinees. Given that equating is used in many low-stakes testing programs, examinees' test-taking effort should be considered carefully when evaluating population invariance in equating, particularly as the occurrence of rapid guessing (RG) has been found to differ across subgroups. To this end, the current study investigated whether differential RG rates between subgroups can lead to incorrect inferences concerning population invariance in test equating. A simulation was built to generate data for two examinee subgroups (one more motivated than the other) administered two alternative forms of multiple-choice items. The rate of RG and ability characteristics of rapid guessers were manipulated. Results showed that as RG responses increased, false positive and false negative inferences of equating invariance were respectively observed at the lower and upper ends of the observed score scale. This result was supported by an empirical analysis of an international assessment. These findings suggest that RG should be investigated and documented prior to test equating, especially in low-stakes assessment contexts. A failure to do so may lead to incorrect inferences concerning fairness in equating.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 7","pages":"589-604"},"PeriodicalIF":1.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483216/pdf/10.1177_01466216221108991.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33466450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multistage Testing in Heterogeneous Populations: Some Design and Implementation Considerations.","authors":"Leslie Rutkowski, Yuan-Ling Liaw, Dubravka Svetina, David Rutkowski","doi":"10.1177/01466216221108123","DOIUrl":"https://doi.org/10.1177/01466216221108123","url":null,"abstract":"<p><p>A central challenge in international large-scale assessments is adequately measuring dozens of highly heterogeneous populations, many of which are low performers. To that end, multistage adaptive testing offers one possibility for better assessing across the achievement continuum. This study examines the way that several multistage test design and implementation choices can impact measurement performance in this setting. To attend to gaps in the knowledge base, we extended previous research to include multiple, linked panels, more appropriate estimates of achievement, and multiple populations of varied proficiency. Including achievement distributions from varied populations and associated item parameters, we design and execute a simulation study that mimics an established international assessment. We compare several routing schemes and varied module lengths in terms of item and person parameter recovery. Our findings suggest that, particularly for low performing populations, multistage testing offers precision advantages. Further, findings indicate that equal module lengths-desirable for controlling position effects-and classical routing methods, which lower the technological burden of implementing such a design, produce good results. Finally, probabilistic misrouting offers advantages over merit routing for controlling bias in item and person parameters. Overall, multistage testing shows promise for extending the scope of international assessments. We discuss the importance of our findings for operational work in the international assessment domain.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"494-508"},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382094/pdf/10.1177_01466216221108123.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10189453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing Sampling Variability for Item Response Theory Scale Scores in a Fixed-Parameter Calibrated Projection Design.","authors":"Shuangshuang Xu, Yang Liu","doi":"10.1177/01466216221108136","DOIUrl":"https://doi.org/10.1177/01466216221108136","url":null,"abstract":"<p><p>A common practice of linking uses estimated item parameters to calculate projected scores. This procedure fails to account for the carry-over sampling variability. Neglecting sampling variability could consequently lead to understated uncertainty for Item Response Theory (IRT) scale scores. To address the issue, we apply a Multiple Imputation (MI) approach to adjust the Posterior Standard Deviations of IRT scale scores. The MI procedure involves drawing multiple sets of plausible values from an approximate sampling distribution of the estimated item parameters. When two scales to be linked were previously calibrated, item parameters can be fixed at their original published scales, and the latent variable means and covariances of the two scales can then be estimated conditional on the fixed item parameters. The conditional estimation procedure is a special case of Restricted Recalibration (RR), in which the asymptotic sampling distribution of estimated parameters follows from the general theory of pseudo Maximum Likelihood (ML) estimation. We evaluate the combination of RR and MI by a simulation study to examine the impact of carry-over sampling variability under various simulation conditions. We also illustrate how to apply the proposed method to real data by revisiting Thissen et al. (2015).</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"509-528"},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382091/pdf/10.1177_01466216221108136.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10133732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of Sampling Variance of Item Response Theory Parameter Estimates in Detecting Outliers in Common Item Equating.","authors":"Chunyan Liu, Daniel Jurich","doi":"10.1177/01466216221108122","DOIUrl":"https://doi.org/10.1177/01466216221108122","url":null,"abstract":"<p><p>In common item equating, the existence of item outliers may impact the accuracy of equating results and bring significant ramifications to the validity of test score interpretations. Therefore, common item equating should involve a screening process to flag outlying items and exclude them from the common item set before equating is conducted. The current simulation study demonstrated that the sampling variance associated with the item response theory (IRT) item parameter estimates can help detect outliers in the common items under the 2-PL and 3-PL IRT models. The results showed the proposed sampling variance statistic (<i>SV</i>) outperformed the traditional displacement method with cutoff values of 0.3 and 0.5 along a variety of evaluation criteria. Based on the favorable results, item outlier detection statistics based on estimated sampling variability warrant further consideration in both research and practice.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"529-547"},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382092/pdf/10.1177_01466216221108122.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10487809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two New Models for Item Preknowledge.","authors":"Kylie Gorney, James A Wollack","doi":"10.1177/01466216221108130","DOIUrl":"https://doi.org/10.1177/01466216221108130","url":null,"abstract":"<p><p>To evaluate preknowledge detection methods, researchers often conduct simulation studies in which they use models to generate the data. In this article, we propose two new models to represent item preknowledge. Contrary to existing models, we allow the impact of preknowledge to vary across persons and items in order to better represent situations that are encountered in practice. We use three real data sets to evaluate the fit of the new models with respect to two types of preknowledge: items only, and items and the correct answer key. Results show that the two new models provide the best fit compared to several other existing preknowledge models. Furthermore, model parameter estimates were found to vary substantially depending on the type of preknowledge being considered, indicating that answer key disclosure has a profound impact on testing behavior.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"447-461"},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382093/pdf/10.1177_01466216221108130.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10487814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Item-Fit Statistic Based on Posterior Probabilities of Membership in Ability Groups.","authors":"Bartosz Kondratek","doi":"10.1177/01466216221108061","DOIUrl":"https://doi.org/10.1177/01466216221108061","url":null,"abstract":"<p><p>A novel approach to item-fit analysis based on an asymptotic test is proposed. The new test statistic, <math> <mrow><msubsup><mi>χ</mi> <mi>w</mi> <mn>2</mn></msubsup> </mrow> </math> , compares pseudo-observed and expected item mean scores over a set of ability bins. The item mean scores are computed as weighted means with weights based on test-takers' <i>a posteriori</i> density of ability within the bin. This article explores the properties of <math> <mrow><msubsup><mi>χ</mi> <mi>w</mi> <mn>2</mn></msubsup> </mrow> </math> in case of dichotomously scored items for unidimensional IRT models. Monte Carlo experiments were conducted to analyze the performance of <math> <mrow><msubsup><mi>χ</mi> <mi>w</mi> <mn>2</mn></msubsup> </mrow> </math> . Type I error of <math> <mrow><msubsup><mi>χ</mi> <mi>w</mi> <mn>2</mn></msubsup> <mo> </mo></mrow> </math> was acceptably close to the nominal level and it had greater power than Orlando and Thissen's <math><mrow><mi>S</mi> <mo>-</mo> <msup><mi>x</mi> <mn>2</mn></msup> </mrow> </math> . Under some conditions, power of <math> <mrow><msubsup><mi>χ</mi> <mi>w</mi> <mn>2</mn></msubsup> </mrow> </math> also exceeded the one reported for the computationally more demanding Stone's <math> <mrow><msup><mi>χ</mi> <mrow><mn>2</mn> <mo>∗</mo></mrow> </msup> </mrow> </math> .</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"462-478"},"PeriodicalIF":1.2,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382089/pdf/10.1177_01466216221108061.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10132911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Item Response Theory True Score Equating for the Bifactor Model Under the Common-Item Nonequivalent Groups Design.","authors":"Kyung Yong Kim","doi":"10.1177/01466216221108995","DOIUrl":"10.1177/01466216221108995","url":null,"abstract":"<p><p>Applying item response theory (IRT) true score equating to multidimensional IRT models is not straightforward due to the one-to-many relationship between a true score and latent variables. Under the common-item nonequivalent groups design, the purpose of the current study was to introduce two IRT true score equating procedures that adopted different dimension reduction strategies for the bifactor model. The first procedure, which was referred to as the integration procedure, linked the latent variable scales for the bifactor model and integrated out the specific factors from the item response function of the bifactor model. Then, IRT true score equating was applied to the marginalized bifactor model. The second procedure, which was referred to as the PIRT-based procedure, projected the specific dimensions onto the general dimension to obtain a locally dependent unidimensional IRT (UIRT) model and linked the scales of the UIRT model, followed by the application of IRT true score equating to the locally dependent UIRT model. Equating results obtained with the two equating procedures along with those obtained with the unidimensional three-parameter logistic (3PL) model were compared using both simulated and real data. In general, the integration and PIRT-based procedures provided equating results that were not practically different. Furthermore, the equating results produced by the two bifactor-based procedures became more accurate than the results returned by the 3PL model as tests became more multidimensional.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 6","pages":"479-493"},"PeriodicalIF":1.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9382090/pdf/10.1177_01466216221108995.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10189451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}