{"title":"Croon's Bias-Corrected Estimation for Multilevel Structural Equation Models with Non-Normal Indicators and Model Misspecifications.","authors":"Kyle Cox, Benjamin Kelcey","doi":"10.1177/00131644221080451","DOIUrl":"10.1177/00131644221080451","url":null,"abstract":"<p><p>Multilevel structural equation models (MSEMs) are well suited for educational research because they accommodate complex systems involving latent variables in multilevel settings. Estimation using Croon's bias-corrected factor score (BCFS) path estimation has recently been extended to MSEMs and demonstrated promise with limited sample sizes. This makes it well suited for planned educational research which often involves sample sizes constrained by logistical and financial factors. However, the performance of BCFS estimation with MSEMs has yet to be thoroughly explored under common but difficult conditions including in the presence of non-normal indicators and model misspecifications. We conducted two simulation studies to evaluate the accuracy and efficiency of the estimator under these conditions. Results suggest that BCFS estimation of MSEMs is often more dependable, more efficient, and less biased than other estimation approaches when sample sizes are limited or model misspecifications are present but is more susceptible to indicator non-normality. These results support, supplement, and elucidate previous literature describing the effective performance of BCFS estimation encouraging its utilization as an alternative or supplemental estimator for MSEMs.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"83 1","pages":"48-72"},"PeriodicalIF":2.7,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9806522/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10489718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolving Dimensionality in a Child Assessment Tool: An Application of the Multilevel Bifactor Model.","authors":"Hope O Akaeze, Frank R Lawrence, Jamie Heng-Chieh Wu","doi":"10.1177/00131644221082688","DOIUrl":"10.1177/00131644221082688","url":null,"abstract":"<p><p>Multidimensionality and hierarchical data structure are common in assessment data. These design features, if not accounted for, can threaten the validity of the results and inferences generated from factor analysis, a method frequently employed to assess test dimensionality. In this article, we describe and demonstrate the application of the multilevel bifactor model to address these features in examining test dimensionality. The tool for this exposition is the Child Observation Record Advantage 1.5 (COR-Adv1.5), a child assessment instrument widely used in Head Start programs. Previous studies on this assessment tool reported highly correlated factors and did not account for the nesting of children in classrooms. Results from this study show how the flexibility of the multilevel bifactor model, together with useful model-based statistics, can be harnessed to judge the dimensionality of a test instrument and inform the interpretability of the associated factor scores.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"83 1","pages":"93-115"},"PeriodicalIF":2.7,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9806520/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10494318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Power Analysis for Moderator Effects in Longitudinal Cluster Randomized Designs.","authors":"Wei Li, Spyros Konstantopoulos","doi":"10.1177/00131644221077359","DOIUrl":"10.1177/00131644221077359","url":null,"abstract":"<p><p>Cluster randomized control trials often incorporate a longitudinal component where, for example, students are followed over time and student outcomes are measured repeatedly. Besides examining how intervention effects induce changes in outcomes, researchers are sometimes also interested in exploring whether intervention effects on outcomes are modified by moderator variables at the individual (e.g., gender, race/ethnicity) and/or the cluster level (e.g., school urbanicity) over time. This study provides methods for statistical power analysis of moderator effects in two- and three-level longitudinal cluster randomized designs. Power computations take into account clustering effects, the number of measurement occasions, the impact of sample sizes at different levels, covariates effects, and the variance of the moderator variable. Illustrative examples are offered to demonstrate the applicability of the methods.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"83 1","pages":"116-145"},"PeriodicalIF":2.7,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9806516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10489266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of Coefficient Alpha and Its Alternatives: Effects of Different Types of Non-Normality.","authors":"Leifeng Xiao, Kit-Tai Hau","doi":"10.1177/00131644221088240","DOIUrl":"10.1177/00131644221088240","url":null,"abstract":"<p><p>We examined the performance of coefficient alpha and its potential competitors (ordinal alpha, omega total, Revelle's omega total [omega RT], omega hierarchical [omega h], greatest lower bound [GLB], and coefficient <i>H</i>) with continuous and discrete data having different types of non-normality. Results showed the estimation bias was acceptable for continuous data with varying degrees of non-normality when the scales were strong (high loadings). This bias, however, became quite large with moderate strength scales and increased with increasing non-normality. For Likert-type scales, other than omega h, most indices were acceptable with non-normal data having at least four points, and more points were better. For different exponential distributed data, omega RT and GLB were robust, whereas the bias of other indices for binomial-beta distribution was generally large. An examination of an authentic large-scale international survey suggested that its items were at worst moderately non-normal; hence, non-normality was not a big concern. We recommend (a) the demand for continuous and normally distributed data for alpha may not be necessary for less severely non-normal data; (b) for severely non-normal data, we should have at least four scale points, and more points are better; and (c) there is no single golden standard for all data types, other issues such as scale loading, model structure, or scale length are also important.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"83 1","pages":"5-27"},"PeriodicalIF":2.7,"publicationDate":"2023-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9806521/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10489719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Effect Size Measures for Nested Measurement Models.","authors":"Tenko Raykov, Christine DiStefano, Lisa Calvocoressi, Martin Volker","doi":"10.1177/00131644211066845","DOIUrl":"10.1177/00131644211066845","url":null,"abstract":"<p><p>A class of effect size indices are discussed that evaluate the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free from an explicit relationship to sample size. The described indices represent the extent to which respective linear combinations of the proportions of explained variance in the manifest variables are changed as a result of introducing the constraints. The indices reflect corresponding aspects of the impact of the restrictions and are independent of their statistical significance or lack thereof. The discussed effect size measures are readily point and interval estimated, using popular software, and their application is illustrated with numerical examples.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"82 6","pages":"1225-1246"},"PeriodicalIF":2.1,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9619317/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10840615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resting-State Functional MRI Adaptation with Attention Graph Convolution Network for Brain Disorder Identification.","authors":"Ying Chu, Haonan Ren, Lishan Qiao, Mingxia Liu","doi":"10.3390/brainsci12101413","DOIUrl":"10.3390/brainsci12101413","url":null,"abstract":"<p><p>Multi-site resting-state functional magnetic resonance imaging (rs-fMRI) data can facilitate learning-based approaches to train reliable models on more data. However, significant data heterogeneity between imaging sites, caused by different scanners or protocols, can negatively impact the generalization ability of learned models. In addition, previous studies have shown that graph convolution neural networks (GCNs) are effective in mining fMRI biomarkers. However, they generally ignore the potentially different contributions of brain regions- of-interest (ROIs) to automated disease diagnosis/prognosis. In this work, we propose a multi-site rs-fMRI adaptation framework with attention GCN (A<sup>2</sup>GCN) for brain disorder identification. Specifically, the proposed A<sup>2</sup>GCN consists of three major components: (1) a node representation learning module based on GCN to extract rs-fMRI features from functional connectivity networks, (2) a node attention mechanism module to capture the contributions of ROIs, and (3) a domain adaptation module to alleviate the differences in data distribution between sites through the constraint of mean absolute error and covariance. The A<sup>2</sup>GCN not only reduces data heterogeneity across sites, but also improves the interpretability of the learning algorithm by exploring important ROIs. Experimental results on the public ABIDE database demonstrate that our method achieves remarkable performance in fMRI-based recognition of autism spectrum disorders.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"23 1","pages":""},"PeriodicalIF":2.7,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9599902/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86831644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-iterative Conditional Pairwise Estimation for the Rating Scale Model.","authors":"Mark Elliott, Paula Buttery","doi":"10.1177/00131644211046253","DOIUrl":"10.1177/00131644211046253","url":null,"abstract":"<p><p>We investigate two non-iterative estimation procedures for Rasch models, the pair-wise estimation procedure (PAIR) and the Eigenvector method (EVM), and identify theoretical issues with EVM for rating scale model (RSM) threshold estimation. We develop a new procedure to resolve these issues-the conditional pairwise adjacent thresholds procedure (CPAT)-and test the methods using a large number of simulated datasets to compare the estimates against known generating parameters. We find support for our hypotheses, in particular that EVM threshold estimates suffer from theoretical issues which lead to biased estimates and that CPAT represents a means of resolving these issues. These findings are both statistically significant (<i>p</i> < .001) and of a large effect size. We conclude that CPAT deserves serious consideration as a conditional, computationally efficient approach to Rasch parameter estimation for the RSM. CPAT has particular potential for use in contexts where computational load may be an issue, such as systems with multiple online algorithms and large test banks with sparse data designs.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"82 5","pages":"989-1019"},"PeriodicalIF":2.1,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/f6/31/10.1177_00131644211046253.PMC9386884.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40626320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symptom Presence and Symptom Severity as Unique Indicators of Psychopathology: An Application of Multidimensional Zero-Inflated and Hurdle Graded Response Models.","authors":"Brooke E Magnus, Yang Liu","doi":"10.1177/00131644211061820","DOIUrl":"10.1177/00131644211061820","url":null,"abstract":"<p><p>Questionnaires inquiring about psychopathology symptoms often produce data with excess zeros or the equivalent (e.g., none, never, and not at all). This type of zero inflation is especially common in nonclinical samples in which many people do not exhibit psychopathology, and if unaccounted for, can result in biased parameter estimates when fitting latent variable models. In the present research, we adopt a maximum likelihood approach in fitting multidimensional zero-inflated and hurdle graded response models to data from a psychological distress measure. These models include two latent variables: susceptibility, which relates to the probability of endorsing the symptom at all, and severity, which relates to the frequency of the symptom, given its presence. After estimating model parameters, we compute susceptibility and severity scale scores and include them as explanatory variables in modeling health-related criterion measures (e.g., suicide attempts, diagnosis of major depressive disorder). Results indicate that susceptibility and severity uniquely and differentially predict other health outcomes, which suggests that symptom presence and symptom severity are unique indicators of psychopathology and both may be clinically useful. Psychometric and clinical implications are discussed, including scale score reliability.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"82 5","pages":"938-966"},"PeriodicalIF":2.7,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9386878/pdf/10.1177_00131644211061820.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40626321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multilevel Mixture IRT Framework for Modeling Response Times as Predictors or Indicators of Response Engagement in IRT Models.","authors":"Gabriel Nagy, Esther Ulitzsch","doi":"10.1177/00131644211045351","DOIUrl":"https://doi.org/10.1177/00131644211045351","url":null,"abstract":"<p><p>Disengaged item responses pose a threat to the validity of the results provided by large-scale assessments. Several procedures for identifying disengaged responses on the basis of observed response times have been suggested, and item response theory (IRT) models for response engagement have been proposed. We outline that response time-based procedures for classifying response engagement and IRT models for response engagement are based on common ideas, and we propose the distinction between independent and dependent latent class IRT models. In all IRT models considered, response engagement is represented by an item-level latent class variable, but the models assume that response times either reflect or predict engagement. We summarize existing IRT models that belong to each group and extend them to increase their flexibility. Furthermore, we propose a flexible multilevel mixture IRT framework in which all IRT models can be estimated by means of marginal maximum likelihood. The framework is based on the widespread Mplus software, thereby making the procedure accessible to a broad audience. The procedures are illustrated on the basis of publicly available large-scale data. Our results show that the different IRT models for response engagement provided slightly different adjustments of item parameters of individuals' proficiency estimates relative to a conventional IRT model.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"82 5","pages":"845-879"},"PeriodicalIF":2.7,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/c5/49/10.1177_00131644211045351.PMC9386881.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40628154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploratory Graph Analysis for Factor Retention: Simulation Results for Continuous and Binary Data.","authors":"Tim Cosemans, Yves Rosseel, Sarah Gelper","doi":"10.1177/00131644211059089","DOIUrl":"https://doi.org/10.1177/00131644211059089","url":null,"abstract":"<p><p>Exploratory graph analysis (EGA) is a commonly applied technique intended to help social scientists discover latent variables. Yet, the results can be influenced by the methodological decisions the researcher makes along the way. In this article, we focus on the choice regarding the number of factors to retain: We compare the performance of the recently developed EGA with various traditional factor retention criteria. We use both continuous and binary data, as evidence regarding the accuracy of such criteria in the latter case is scarce. Simulation results, based on scenarios resulting from varying sample size, communalities from major factors, interfactor correlations, skewness, and correlation measure, show that EGA outperforms the traditional factor retention criteria considered in most cases in terms of bias and accuracy. In addition, we show that factor retention decisions for binary data are preferably made using Pearson, instead of tetrachoric, correlations, which is contradictory to popular belief.</p>","PeriodicalId":11502,"journal":{"name":"Educational and Psychological Measurement","volume":"82 5","pages":"880-910"},"PeriodicalIF":2.7,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9386885/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40626317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}