Psychometrika 88(2): 613-635. Pub date: 2023-06-01; Epub date: 2023-01-22; DOI: 10.1007/s11336-022-09900-7
Yinghan Chen, Steven Andrew Culpepper, Yuguo Chen
"Bayesian Inference for an Unknown Number of Attributes in Restricted Latent Class Models."
Abstract: The specification of the Q matrix in cognitive diagnosis models is important for correct classification of attribute profiles. Researchers have proposed many methods for the estimation and validation of data-driven Q matrices. However, inference of the number of attributes in the general restricted latent class model remains an open question. We propose a Bayesian framework for general restricted latent class models and use a spike-and-slab prior to avoid the computational issues caused by the varying dimension of the model parameters associated with the number of attributes, K. We develop an efficient Metropolis-within-Gibbs algorithm to estimate K and the corresponding Q matrix simultaneously. The proposed algorithm uses the stick-breaking construction to mimic an Indian buffet process and employs a novel Metropolis-Hastings transition step to encourage exploring the sample space associated with different values of K. We evaluate the performance of the proposed method through a simulation study under different model specifications and apply the method to a real data set related to a fluid intelligence matrix reasoning test.
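The abstract above mentions a stick-breaking construction that mimics an Indian buffet process (IBP) prior over attribute matrices. As a rough illustration of that idea only (not the authors' Metropolis-within-Gibbs sampler), a truncated stick-breaking IBP can be simulated as below; `K_max`, `alpha`, and the function name are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ibp_stick_breaking(n_subjects, K_max, alpha):
    """Truncated stick-breaking construction of an Indian buffet process.

    Attribute probabilities decay as mu_k = prod_{j<=k} nu_j with
    nu_j ~ Beta(alpha, 1); each entry of the binary attribute matrix Z
    is then Bernoulli(mu_k), so later columns are active less often.
    """
    nu = rng.beta(alpha, 1.0, size=K_max)
    mu = np.cumprod(nu)                        # decreasing attribute probabilities
    Z = rng.random((n_subjects, K_max)) < mu   # n_subjects x K_max binary matrix
    return Z.astype(int), mu

Z, mu = ibp_stick_breaking(n_subjects=500, K_max=10, alpha=2.0)
# "Effective" number of attributes: columns with at least one active entry
K_effective = int((Z.sum(axis=0) > 0).sum())
print(K_effective, mu.round(3))
```

Because the column probabilities shrink geometrically, only a finite number of columns tend to be occupied, which is what lets a sampler treat K as unknown while working with a fixed truncation level.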
Psychometrika 88(2): 413-433. Pub date: 2023-06-01; Epub date: 2023-04-18; DOI: 10.1007/s11336-023-09909-6
Dirk Lubbe
"Advantages of Using Unweighted Approximation Error Measures for Model Fit Assessment."
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10188575/pdf/
Abstract: Fit indices are widely used for assessing the goodness of fit of latent variable models. The most prominent fit indices, such as the root-mean-square error of approximation (RMSEA) and the comparative fit index (CFI), are based on a noncentrality parameter estimate derived from the model fit statistic. While a noncentrality parameter estimate is well suited for quantifying the amount of systematic error, the complex weighting function involved in its calculation makes indices derived from it challenging to interpret. Moreover, noncentrality-parameter-based fit indices yield systematically different values depending on the indicators' level of measurement. For instance, RMSEA and CFI yield more favorable values for models with categorical rather than metric variables under otherwise identical conditions. In the present article, approaches for obtaining an approximation discrepancy estimate that is independent of any specific weighting function are considered. From these unweighted approximation error estimates, fit indices analogous to RMSEA and CFI are calculated, and their finite-sample properties are investigated in simulation studies. The results illustrate that the new fit indices consistently estimate their true value, which, in contrast to other fit indices, is the same for metric and categorical variables. Advantages with respect to interpretability are discussed, and cutoff criteria for the new indices are considered.
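For context on what the abstract above contrasts against: the conventional RMSEA and CFI are simple functions of the fit statistic's estimated noncentrality. A minimal sketch of those standard definitions follows (the (n - 1) scaling is one common SEM convention; this is not the paper's new unweighted indices).

```python
import math

def rmsea(chi2, df, n):
    """Root-mean-square error of approximation from the model fit statistic.

    Uses the standard noncentrality estimate max(chi2 - df, 0), scaled by
    degrees of freedom and sample size.
    """
    ncp = max(chi2 - df, 0.0)
    return math.sqrt(ncp / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index: 1 minus the ratio of the target model's
    noncentrality to the baseline (independence) model's noncentrality."""
    ncp = max(chi2 - df, 0.0)
    ncp_base = max(chi2_base - df_base, ncp, 0.0)
    return 1.0 - ncp / ncp_base if ncp_base > 0 else 1.0

print(round(rmsea(chi2=85.0, df=50, n=400), 4))   # small values = close fit
print(round(cfi(chi2=85.0, df=50, chi2_base=900.0, df_base=66), 4))
```

Because chi2 here is a weighted discrepancy whose weights depend on the indicators' level of measurement, both indices inherit that dependence, which is the interpretability problem the article addresses.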
Psychometrika 88(2): 361-386. Pub date: 2023-06-01; Epub date: 2023-02-16; DOI: 10.1007/s11336-023-09904-x
Ying Liu, Steven Andrew Culpepper, Yuguo Chen
"Identifiability of Hidden Markov Models for Learning Trajectories in Cognitive Diagnosis."
Abstract: Hidden Markov models (HMMs) have been applied in many domains, which has made the identifiability of HMMs an active research question. The classical identifiability conditions established in previous studies are too strong for practical analysis. In this paper, we propose generic identifiability conditions for discrete-time HMMs with a finite state space. In addition, recent studies of cognitive diagnosis models (CDMs) have applied first-order HMMs to track changes in attributes related to learning. However, applying CDMs requires a known Q matrix to infer the underlying structure between latent attributes and items, and identifiability constraints on the model parameters must also be specified. We propose generic identifiability constraints for our restricted HMM and then estimate the model parameters, including the Q matrix, within a Bayesian framework. We present Monte Carlo simulation results to support our conclusions and apply the developed model to a real dataset.
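Identifiability here asks when the HMM parameters (initial distribution, transition matrix, emission matrix) are recoverable from the distribution of observation sequences. As background only (not the paper's restricted CDM-HMM), the likelihood those parameters induce can be computed with the standard scaled forward algorithm:

```python
import numpy as np

def hmm_log_likelihood(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM
    via the scaled forward algorithm.

    pi : (S,) initial state distribution
    A  : (S, S) transition matrix, A[i, j] = P(next state j | state i)
    B  : (S, M) emission matrix, B[i, m] = P(observe m | state i)
    obs: sequence of observed symbol indices
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                  # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # propagate, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Toy two-state, two-symbol example (illustrative numbers)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
print(hmm_log_likelihood(pi, A, B, [0, 1, 1, 0]))
```

Identifiability conditions characterize when two different (pi, A, B) triples cannot produce the same value of this likelihood for every observation sequence, up to relabeling of the hidden states.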
Psychometrika 88(2): 580-612. Pub date: 2023-06-01; Epub date: 2022-10-01; DOI: 10.1007/s11336-022-09887-1
Zhenghao Zeng, Yuqi Gu, Gongjun Xu
"A Tensor-EM Method for Large-Scale Latent Class Analysis with Binary Responses."
Abstract: Latent class models are powerful statistical modeling tools widely used in the psychological, behavioral, and social sciences. In the modern era of data science, researchers often have access to response data collected from large-scale surveys or assessments, featuring many items (large J) and many subjects (large N). This contrasts with the traditional regime of fixed J and large N. To analyze such large-scale data, it is important to develop methods that are both computationally efficient and theoretically valid. In terms of computation, the conventional EM algorithm for latent class models tends to converge slowly on large-scale data and may converge to a local optimum instead of the maximum likelihood estimator (MLE). Motivated by this, we introduce the tensor decomposition perspective into latent class analysis with binary responses. Methodologically, we propose to use a moment-based tensor power method in a first step and then use the obtained estimates to initialize the EM algorithm in a second step. Theoretically, we establish the clustering consistency of the MLE in assigning subjects to latent classes as N and J both go to infinity. Simulation studies suggest that the proposed tensor-EM pipeline enjoys both good accuracy and computational efficiency for large-scale data with binary responses. We also apply the proposed method to an educational assessment dataset as an illustration.
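The second step of the pipeline described above is a standard EM algorithm for a latent class model with binary responses. A minimal sketch of that EM step follows; the random initialization used here is exactly what the paper's moment-based tensor first step would replace, and all names and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_class_em(Y, n_classes, n_iter=200, tol=1e-8):
    """EM for a latent class model with binary responses.

    Y: (N, J) binary response matrix. Returns class proportions `prop`
    and item-response probabilities `theta` (n_classes x J).
    """
    N, J = Y.shape
    prop = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, J))
    prev = -np.inf
    for _ in range(n_iter):
        # E-step: posterior class memberships, computed in logs for stability
        log_lik = (Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T
                   + np.log(prop))
        m = log_lik.max(axis=1, keepdims=True)
        post = np.exp(log_lik - m)
        post /= post.sum(axis=1, keepdims=True)
        total = (m.ravel() + np.log(np.exp(log_lik - m).sum(axis=1))).sum()
        # M-step: update class proportions and item-response probabilities
        prop = post.mean(axis=0)
        theta = np.clip((post.T @ Y) / post.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
        if total - prev < tol:
            break
        prev = total
    return prop, theta

# Simulate two well-separated classes and refit (illustrative parameters)
true_theta = np.array([[0.9] * 6, [0.15] * 6])
z = rng.integers(0, 2, size=1000)
Y = (rng.random((1000, 6)) < true_theta[z]).astype(float)
prop, theta = latent_class_em(Y, n_classes=2)
```

Because the EM objective is nonconvex, the quality of the returned local optimum depends heavily on this initialization, which is the motivation for the moment-based warm start in the first step.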
Psychometrika 88(2): 456-486. Pub date: 2023-06-01; Epub date: 2023-03-28; DOI: 10.1007/s11336-023-09910-z
Øystein Sørensen, Anders M Fjell, Kristine B Walhovd
"Longitudinal Modeling of Age-Dependent Latent Traits with Generalized Additive Latent and Mixed Models."
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10188428/pdf/
Abstract: We present generalized additive latent and mixed models (GALAMMs) for the analysis of clustered data with responses and latent variables depending smoothly on observed variables. A scalable maximum likelihood estimation algorithm is proposed, utilizing the Laplace approximation, sparse matrix computation, and automatic differentiation. Mixed response types, heteroscedasticity, and crossed random effects are naturally incorporated into the framework. The models developed were motivated by applications in cognitive neuroscience, and two case studies are presented. First, we show how GALAMMs can jointly model the complex lifespan trajectories of episodic memory, working memory, and speed/executive function, measured by the California Verbal Learning Test (CVLT), digit span tests, and Stroop tests, respectively. Next, we study the effect of socioeconomic status on brain structure, using data on education and income together with hippocampal volumes estimated by magnetic resonance imaging. By combining semiparametric estimation with latent variable modeling, GALAMMs allow a more realistic representation of how brain and cognition vary across the lifespan, while simultaneously estimating latent traits from measured items. Simulation experiments suggest that model estimates are accurate even with moderate sample sizes.
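The Laplace approximation mentioned in the abstract above replaces an intractable integral over latent variables with a Gaussian integral around the integrand's mode. A one-dimensional toy version is sketched below; the function name and the finite-difference second derivative are illustrative only (the paper's implementation uses sparse matrices and automatic differentiation, not this sketch).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_log_integral(h):
    """Laplace approximation to log of the integral of exp(h(u)) du
    for a smooth, unimodal h.

    Finds the mode u_hat of h, approximates h by its second-order Taylor
    expansion there, and integrates the resulting Gaussian:
    log integral exp(h) ~= h(u_hat) + 0.5 * log(2*pi / -h''(u_hat)).
    """
    res = minimize_scalar(lambda u: -h(u))
    u_hat = res.x
    eps = 1e-5
    # Central finite-difference estimate of h''(u_hat)
    h2 = (h(u_hat + eps) - 2 * h(u_hat) + h(u_hat - eps)) / eps**2
    return h(u_hat) + 0.5 * np.log(2 * np.pi / -h2)

# For a Gaussian integrand the approximation is exact:
# integral of exp(-u^2/2) du = sqrt(2*pi), whose log is 0.5*log(2*pi)
approx = laplace_log_integral(lambda u: -0.5 * u**2)
print(approx, 0.5 * np.log(2 * np.pi))
```

In a latent variable model, h is the joint log-density of responses and latent variables as a function of the latent variables, and the approximation makes the marginal likelihood cheap enough to maximize at scale.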