{"title":"glca: An R Package for Multiple-Group Latent Class Analysis.","authors":"Youngsun Kim, Saebom Jeon, Chi Chang, Hwan Chung","doi":"10.1177/01466216221084197","DOIUrl":"10.1177/01466216221084197","url":null,"abstract":"<p><p>Group similarities and differences may manifest themselves in a variety of ways in multiple-group latent class analysis (LCA). Sometimes, measurement models are identical across groups in LCA. In other situations, the measurement models may differ, suggesting that the latent structure itself differs between groups. Tests of measurement invariance shed light on this distinction. We created an R package, glca, that implements procedures for exploring differences in latent class structure between populations, taking the multilevel data structure into account. The glca package handles both fixed-effect LCA and nonparametric random-effect LCA; the former can be applied when populations are segmented by the observed group variable itself, whereas the latter identifies a group-level latent variable and can be used when the group variable has too many levels to allow meaningful group comparisons. The glca package provides functions for statistical test procedures that explore group differences in various LCA models while accounting for the multilevel data structure.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 5","pages":"439-441"},"PeriodicalIF":1.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265491/pdf/10.1177_01466216221084197.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10091269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bridging Models of Biometric and Psychometric Assessment: A Three-Way Joint Modeling Approach of Item Responses, Response Times, and Gaze Fixation Counts.","authors":"Kaiwen Man, Jeffrey R Harring, Peida Zhan","doi":"10.1177/01466216221089344","DOIUrl":"https://doi.org/10.1177/01466216221089344","url":null,"abstract":"<p><p>Recently, joint models of item response data and response times have been proposed to better assess and understand test takers' learning processes. This article demonstrates how biometric information, such as gaze fixation counts obtained from an eye-tracking machine, can be integrated into the measurement model. The proposed joint modeling framework accommodates the relations among a test taker's latent ability, working speed, and test engagement level via a person-side variance-covariance structure, while simultaneously permitting the modeling of item difficulty, time intensity, and engagement intensity through an item-side variance-covariance structure. A Bayesian estimation scheme is used to fit the proposed model to data. Posterior predictive model checking based on three discrepancy measures, each corresponding to a different model component, is introduced to assess model-data fit. Findings from a Monte Carlo simulation and results from analyzing experimental data demonstrate the utility of the model.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 5","pages":"361-381"},"PeriodicalIF":1.2,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265489/pdf/10.1177_01466216221089344.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10091266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Item Response Theory Models With Flexible Generalized Logit Links.","authors":"Jiwei Zhang, Ying-Ying Zhang, Jian Tao, Ming-Hui Chen","doi":"10.1177/01466216221089343","DOIUrl":"10.1177/01466216221089343","url":null,"abstract":"<p><p>In educational and psychological research, the logit and probit links are often used to fit binary item response data. The appropriateness and importance of the choice of links within the item response theory (IRT) framework has not yet been investigated. In this paper, we present a family of IRT models with generalized logit links, which includes the traditional logistic and normal ogive models as special cases. This family of models is flexible enough not only to adjust the item characteristic curve tail probability through two shape parameters but also to allow the same link or different links to be fit to different items within the IRT model framework. In addition, the proposed models are implemented in the Stan software to sample from the posterior distributions. Using readily available Stan outputs, four Bayesian model selection criteria are computed to guide the choice of links within the IRT model framework. Extensive simulation studies are conducted to examine the empirical performance of the proposed models and the model fits in terms of \"in-sample\" and \"out-of-sample\" predictions based on the deviance. Finally, a detailed analysis of real reading assessment data is carried out to illustrate the proposed methodology.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 5","pages":"382-405"},"PeriodicalIF":1.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265488/pdf/10.1177_01466216221089343.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10091271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Objective Item Selection Methods in Computerized Adaptive Test Using the Higher-Order Cognitive Diagnostic Models.","authors":"Chongqin Xi, Dongbo Tu, Yan Cai","doi":"10.1177/01466216221089342","DOIUrl":"https://doi.org/10.1177/01466216221089342","url":null,"abstract":"<p><p>To efficiently obtain information about both the general abilities and the detailed cognitive profiles of examinees from a single model with a single calibration process, higher-order cognitive diagnostic computerized adaptive testing (CD-CAT), which employs higher-order cognitive diagnostic models, has been developed. However, the current item selection methods used in higher-order CD-CAT adaptively select items according to the attribute profiles only, which might lead to low precision for general abilities; hence, appropriate methods are proposed for this CAT system in this study. Under the framework of the higher-order models, responses are affected by attribute profiles, which are in turn governed by general abilities; it is therefore reasonable to hold that item responses are affected by a combination of general abilities and attribute profiles. Based on the logic of Shannon entropy and the generalized deterministic inputs, noisy \"and\" gate (G-DINA) model discrimination index (GDI), two new item selection methods that consider this combination are proposed for higher-order CD-CAT in this study. The simulation results demonstrated that the new methods achieved more accurate estimations of both general abilities and cognitive profiles than the existing methods and maintained distinct advantages in terms of item pool usage.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 5","pages":"422-438"},"PeriodicalIF":1.2,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265487/pdf/10.1177_01466216221089342.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10091270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the Linear Composite Conjecture for Unidimensional IRT Scale for Multidimensional Responses.","authors":"Tyler Strachan, Uk Hyun Cho, Terry Ackerman, Shyh-Huei Chen, Jimmy de la Torre, Edward H Ip","doi":"10.1177/01466216221084218","DOIUrl":"https://doi.org/10.1177/01466216221084218","url":null,"abstract":"<p><p>The linear composite direction represents, theoretically, where the unidimensional scale would lie within a multidimensional latent space. Using compensatory multidimensional IRT, the linear composite can be derived from the structure of the items and the latent distribution. The purpose of this study was to evaluate the validity of the linear composite conjecture and examine how well a fitted unidimensional IRT model approximates the linear composite direction in a multidimensional latent space. Simulation experiment results overall show that the fitted unidimensional IRT model sufficiently approximates the linear composite direction when the correlation between bivariate latent variables is positive. When the correlation between bivariate latent variables is negative, instability occurs when the fitted unidimensional IRT model is used to approximate the linear composite direction. A real data experiment was also conducted using 20 items from a multiple-choice mathematics test from American College Testing.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 5","pages":"347-360"},"PeriodicalIF":1.2,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9265490/pdf/10.1177_01466216221084218.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10091268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items: Algorithm Development and Applications.","authors":"Xue-Lan Qiu, Jimmy de la Torre, Sage Ro, Wen-Chung Wang","doi":"10.1177/01466216221084209","DOIUrl":"https://doi.org/10.1177/01466216221084209","url":null,"abstract":"<p><p>Computerized adaptive testing (CAT) solutions for tests with multidimensional pairwise-comparison (MPC) items, which aim to measure career interests, values, and personality, are rare. This paper proposes new item selection and exposure control methods for CAT with dichotomous and polytomous MPC items and presents simulation study results. The results show that the procedures are effective in selecting items and controlling within-person statement exposure with no loss of efficiency. Implications are discussed in two applications of the proposed CAT procedures: a work attitude test with dichotomous MPC items and a career interest assessment with polytomous MPC items.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 4","pages":"255-272"},"PeriodicalIF":1.2,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9118927/pdf/10.1177_01466216221084209.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9609917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Cognitive Diagnostic Computerized Adaptive Testing With Multidimensional Item Response Theory.","authors":"Hao Luo, Daxun Wang, Zhiming Guo, Yan Cai, Dongbo Tu","doi":"10.1177/01466216221084214","DOIUrl":"https://doi.org/10.1177/01466216221084214","url":null,"abstract":"<p><p>The new generation of tests focuses not only on general ability but also on finer-grained skills. Guided by this idea, researchers have developed a dual-purpose CD-CAT (Dual-CAT). In the existing Dual-CAT, the models used for overall ability estimation are unidimensional IRT models, which cannot be applied to multidimensional tests. This article develops a multidimensional Dual-CAT to improve its applicability. To achieve this goal, this article first proposes item selection methods for the multidimensional Dual-CAT and then verifies the estimation accuracy and exposure rates of these methods through both a simulation study and a real item bank study. The results show that the established multidimensional Dual-CAT is effective and that the newly proposed methods outperform the traditional methods. Finally, this article discusses future directions for the Dual-CAT.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 4","pages":"288-302"},"PeriodicalIF":1.2,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9118931/pdf/10.1177_01466216221084214.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9911725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Examinees With Item Preknowledge on Real Data.","authors":"Dmitry I Belov, Sarah L Toton","doi":"10.1177/01466216221084202","DOIUrl":"https://doi.org/10.1177/01466216221084202","url":null,"abstract":"<p><p>Recently, Belov & Wollack (2021) developed a method for detecting groups of colluding examinees as cliques in a graph. The objective of this article is to study how the performance of their method on real data with item preknowledge (IP) depends on the mechanism of edge formation governed by a response similarity index (RSI). This study resulted in the development of three new RSIs and demonstrated a remarkable advantage of combining responses and response times for detecting examinees with IP. Possible extensions of this study and recommendations for practitioners were formulated.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 4","pages":"273-287"},"PeriodicalIF":1.2,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9118928/pdf/10.1177_01466216221084202.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9609916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Potential for Interpretational Confounding in Cognitive Diagnosis Models.","authors":"Qi Helen Huang, Daniel M Bolt","doi":"10.1177/01466216221084207","DOIUrl":"10.1177/01466216221084207","url":null,"abstract":"<p><p>Binary examinee mastery/nonmastery classifications in cognitive diagnosis models may often be an approximation to proficiencies that are better regarded as continuous. Such misspecification can lead to inconsistencies in the operational definition of \"mastery\" when binary skills models are assumed. In this paper we demonstrate the potential for an interpretational confounding of the latent skills when truly continuous skills are treated as binary. Using the DINA model as an example, we show how such forms of confounding can be observed through item and/or examinee parameter change when (1) different collections of items (such as representing different test forms) previously calibrated separately are subsequently calibrated together; and (2) when structural restrictions are placed on the relationships among skill attributes (such as the assumption of strictly nonnegative growth over time), among other possibilities. We examine these occurrences in both simulation and real data studies. It is suggested that researchers should regularly attend to the potential for interpretational confounding by studying differences in attribute mastery proportions and/or changes in item parameter (e.g., slip and guess) estimates attributable to skill continuity when the same samples of examinees are administered different test forms, or the same test forms are involved in different calibrations.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 4","pages":"303-320"},"PeriodicalIF":1.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9118932/pdf/10.1177_01466216221084207.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9609918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparison of Modern and Popular Approaches to Calculating Reliability for Dichotomously Scored Items.","authors":"Sébastien Béland, Carl F Falk","doi":"10.1177/01466216221084210","DOIUrl":"10.1177/01466216221084210","url":null,"abstract":"<p><p>Recent work on reliability coefficients has largely focused on continuous items, including critiques of Cronbach's alpha. Although two new model-based reliability coefficients have been proposed for dichotomous items (Dimitrov, 2003a,b; Green & Yang, 2009a), these approaches have yet to be compared to each other or to other popular estimates of reliability such as omega, alpha, and the greatest lower bound. We seek computational improvements to one of these model-based reliability coefficients and, in addition, conduct initial Monte Carlo simulations to compare coefficients using dichotomous data. Our results suggest that such improvements to the model-based approach are warranted and that model-based approaches are generally superior.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"46 1","pages":"321-337"},"PeriodicalIF":1.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9118929/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41659739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}