{"title":"Path and Direction Discovery in Individual Dynamic Factor Models: A Regularized Hybrid Unified Structural Equation Modeling with Latent Variable.","authors":"Ai Ye, Kenneth A Bollen","doi":"10.1080/00273171.2024.2354232","DOIUrl":"10.1080/00273171.2024.2354232","url":null,"abstract":"<p><p>There has been an increasing call to model multivariate time series data with measurement error. The combination of latent factors with a vector autoregressive (VAR) model leads to the dynamic factor model (DFM), in which dynamic relations are derived within factor series, among factors and observed time series, or both. However, a few limitations exist in the current DFM representatives and estimation: (1) the dynamic component contains either directed or undirected contemporaneous relations, but not both, (2) selecting the optimal model in exploratory DFM is a challenge, (3) the consequences of structural misspecifications from model selection is barely studied. Our paper serves to advance DFM with a hybrid VAR representations and the utilization of LASSO regularization to select dynamic implied instrumental variable, two-stage least squares (MIIV-2SLS) estimation. Our proposed method highlights the flexibility in modeling the directions of dynamic relations with a robust estimation. We aim to offer researchers guidance on model selection and estimation in person-centered dynamic assessments.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1019-1042"},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141762570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parametric g-formula for Testing Time-Varying Causal Effects: What It Is, Why It Matters, and How to Implement It in Lavaan.","authors":"Wen Wei Loh, Dongning Ren, Stephen G West","doi":"10.1080/00273171.2024.2354228","DOIUrl":"10.1080/00273171.2024.2354228","url":null,"abstract":"<p><p>Psychologists leverage longitudinal designs to examine the causal effects of a focal predictor (i.e., treatment or exposure) over time. But causal inference of naturally observed time-varying treatments is complicated by treatment-dependent confounding in which earlier treatments affect confounders of later treatments. In this tutorial article, we introduce psychologists to an established solution to this problem from the causal inference literature: the parametric g-computation formula. We explain why the g-formula is effective at handling treatment-dependent confounding. We demonstrate that the parametric g-formula is conceptually intuitive, easy to implement, and well-suited for psychological research. We first clarify that the parametric g-formula essentially utilizes a series of statistical models to estimate the joint distribution of all post-treatment variables. These statistical models can be readily specified as standard multiple linear regression functions. We leverage this insight to implement the parametric g-formula using lavaan, a widely adopted R package for structural equation modeling. Moreover, we describe how the parametric g-formula may be used to estimate a marginal structural model whose causal parameters parsimoniously encode time-varying treatment effects. We hope this accessible introduction to the parametric g-formula will equip psychologists with an analytic tool to address their causal inquiries using longitudinal data.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"995-1018"},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141499613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Pearson's Correlation: Modern Nonparametric Independence Tests for Psychological Research.","authors":"Julian D Karch, Andres F Perez-Alonso, Wicher P Bergsma","doi":"10.1080/00273171.2024.2347960","DOIUrl":"10.1080/00273171.2024.2347960","url":null,"abstract":"<p><p>When examining whether two continuous variables are associated, tests based on Pearson's, Kendall's, and Spearman's correlation coefficients are typically used. This paper explores modern nonparametric independence tests as an alternative, which, unlike traditional tests, have the ability to potentially detect any type of relationship. In addition to existing modern nonparametric independence tests, we developed and considered two novel variants of existing tests, most notably the Heller-Heller-Gorfine-Pearson (HHG-Pearson) test. We conducted a simulation study to compare traditional independence tests, such as Pearson's correlation, and the modern nonparametric independence tests in situations commonly encountered in psychological research. As expected, no test had the highest power across all relationships. However, the distance correlation and the HHG-Pearson tests were found to have substantially greater power than all traditional tests for many relationships and only slightly less power in the worst case. A similar pattern was found in favor of the HHG-Pearson test compared to the distance correlation test. However, given that distance correlation performed better for linear relationships and is more widely accepted, we suggest considering its use in place or additional to traditional methods when there is no prior knowledge of the relationship type, as is often the case in psychological research.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"957-977"},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Linear Mixed-Effects Models for Dependent Data: Power and Accuracy in Parameter Estimation.","authors":"Yue Liu, Kit-Tai Hau, Hongyun Liu","doi":"10.1080/00273171.2024.2350236","DOIUrl":"10.1080/00273171.2024.2350236","url":null,"abstract":"<p><p>Linear mixed-effects models have been increasingly used to analyze dependent data in psychological research. Despite their many advantages over ANOVA, critical issues in their analyses remain. Due to increasing random effects and model complexity, estimation computation is demanding, and convergence becomes challenging. Applied users need help choosing appropriate methods to estimate random effects. The present Monte Carlo simulation study investigated the impacts when the restricted maximum likelihood (REML) and Bayesian estimation models were misspecified in the estimation. We also compared the performance of Akaike information criterion (AIC) and deviance information criterion (DIC) in model selection. Results showed that models neglecting the existing random effects had inflated Type I errors, unacceptable coverage, and inaccurate <i>R</i>-squared measures of fixed and random effects variation. Furthermore, models with redundant random effects had convergence problems, lower statistical power, and inaccurate <i>R</i>-squared measures for Bayesian estimation. The convergence problem is more severe for REML, while reduced power and inaccurate <i>R</i>-squared measures were more severe for Bayesian estimation. Notably, DIC was better than AIC in identifying the true models (especially for models including person random intercept only), improving convergence rates, and providing more accurate effect size estimates, despite AIC having higher power than DIC with 10 items and the most complicated true model.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"978-994"},"PeriodicalIF":5.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Killing Two Birds with One Stone: Accounting for Unfolding Item Response Process and Response Styles Using Unfolding Item Response Tree Models.","authors":"Zhaojun Li, Lingyue Li, Bo Zhang, Mengyang Cao, Louis Tay","doi":"10.1080/00273171.2024.2394607","DOIUrl":"https://doi.org/10.1080/00273171.2024.2394607","url":null,"abstract":"<p><p>Two research streams on responses to Likert-type items have been developing in parallel: (a) unfolding models and (b) individual response styles (RSs). To accurately understand Likert-type item responding, it is vital to parse unfolding responses from RSs. Therefore, we propose the Unfolding Item Response Tree (UIRTree) model. First, we conducted a Monte Carlo simulation study to examine the performance of the UIRTree model compared to three other models - Samejima's Graded Response Model, Generalized Graded Unfolding Model, and Dominance Item Response Tree model, for Likert-type responses. Results showed that when data followed an unfolding response process and contained RSs, AIC was able to select the UIRTree model, while BIC was biased toward the DIRTree model in many conditions. In addition, model parameters in the UIRTree model could be accurately recovered under realistic conditions, and mis-specifying item response process or wrongly ignoring RSs was detrimental to the estimation of key parameters. Then, we used datasets from empirical studies to show that the UIRTree model could fit personality datasets well and produced more reasonable parameter estimates compared to competing models. A strong presence of RS(s) was also revealed by the UIRTree model. Finally, we provided examples with <i>R</i> code for UIRTree model estimation to facilitate the modeling of responses to Likert-type items in future studies.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-23"},"PeriodicalIF":5.3,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142114707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equivalence Testing Based Fit Index: Standardized Root Mean Squared Residual.","authors":"Nataly Beribisky, Robert A Cribbie","doi":"10.1080/00273171.2024.2386686","DOIUrl":"https://doi.org/10.1080/00273171.2024.2386686","url":null,"abstract":"<p><p>A popular measure of model fit in structural equation modeling (SEM) is the standardized root mean squared residual (SRMR) fit index. Equivalence testing has been used to evaluate model fit in structural equation modeling (SEM) but has yet to be applied to SRMR. Accordingly, the present study proposed equivalence-testing based fit tests for the SRMR (ESRMR). Several variations of ESRMR were introduced, incorporating different equivalence bounds and methods of computing confidence intervals. A Monte Carlo simulation study compared these novel tests with traditional methods for evaluating model fit. The results demonstrated that certain ESRMR tests based on an analytic computation of the confidence interval correctly reject poor-fitting models and are well-powered for detecting good-fitting models. We also present an illustrative example with real data to demonstrate how ESRMR may be incorporated into model fit evaluation and reporting. Our recommendation is that ESRMR tests be presented in addition to descriptive fit indices for model fit reporting in SEM.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-20"},"PeriodicalIF":5.3,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latent Reciprocal Engagement and Accuracy Variables in Social Relations Structural Equation Modeling.","authors":"David Jendryczko, Fridtjof W Nussbeck","doi":"10.1080/00273171.2024.2386060","DOIUrl":"https://doi.org/10.1080/00273171.2024.2386060","url":null,"abstract":"<p><p>The social relations model (SRM) is the standard approach for analyzing dyadic data stemming from round-robin designs. The model can be used to estimate correlation-coefficients that reflect the overall reciprocity or accuracy of judgements for individual and dyads on the sample- or population level. Within the social relations structural equation modeling framework and on the statistical grounding of stochastic measurement and classical test theory, we show how the multiple indicator SRM can be modified to capture inter-individual and inter-dyadic differences in reciprocal engagement or inter-individual differences in reciprocal accuracy. All models are illustrated on an open-access round-robin data set containing measures of mimicry, liking, and meta-liking (the belief to be liked). Results suggest that people who engage more strongly in reciprocal mimicry are liked more after an interaction with someone and that overestimating one's own popularity is strongly associated with being liked less. Further applications, advantages and limitations of the models are discussed.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-23"},"PeriodicalIF":5.3,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering Individuals Based on Similarity in Idiographic Factor Loading Patterns.","authors":"Cara J Arizmendi, Kathleen M Gates","doi":"10.1080/00273171.2024.2374826","DOIUrl":"10.1080/00273171.2024.2374826","url":null,"abstract":"<p><p>Idiographic measurement models such as p-technique and dynamic factor analysis (DFA) assess latent constructs at the individual level. These person-specific methods may provide more accurate models than models obtained from aggregated data when individuals are heterogeneous in their processes. Developing clustering methods for the grouping of individuals with similar measurement models would enable researchers to identify if measurement model subtypes exist across individuals as well as assess if the different models correspond to the same latent concept or not. In this paper, methods for clustering individuals based on similarity in measurement model loadings obtained from time series data are proposed. We review literature on idiographic factor modeling and measurement invariance, as well as clustering for time series analysis. Through two studies, we explore the utility and effectiveness of these measures. In <b>Study 1</b>, a simulation study is conducted, demonstrating the recovery of groups generated to have differing factor loadings using the proposed clustering method. In <b>Study 2</b>, an extension of Study 1 to DFA is presented with a simulation study. Overall, we found good recovery of simulated clusters and provide an example demonstrating the method with empirical data.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-25"},"PeriodicalIF":5.3,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754526/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causal Latent Class Analysis with Distal Outcomes: A Modified Three-Step Method Using Inverse Propensity Weighting.","authors":"Trà T Lê, Felix J Clouth, Jeroen K Vermunt","doi":"10.1080/00273171.2024.2367485","DOIUrl":"https://doi.org/10.1080/00273171.2024.2367485","url":null,"abstract":"<p><p>Bias-adjusted three-step latent class (LC) analysis is a popular technique for estimating the relationship between LC membership and distal outcomes. Since it is impossible to randomize LC membership, causal inference techniques are needed to estimate causal effects leveraging observational data. This paper proposes two novel strategies that make use of propensity scores to estimate the causal effect of LC membership on a distal outcome variable. Both strategies modify the bias-adjusted three-step approach by using propensity scores in the last step to control for confounding. The first strategy utilizes inverse propensity weighting (IPW), whereas the second strategy includes the propensity scores as control variables. Classification errors are accounted for using the BCH or ML corrections. We evaluate the performance of these methods in a simulation study by comparing it with three existing approaches that also use propensity scores in a stepwise LC analysis. Both of our newly proposed methods return essentially unbiased parameter estimates outperforming previously proposed methods. However, for smaller sample sizes our IPW based approach shows large variability in the estimates and can be prone to non-convergence. Furthermore, the use of these newly proposed methods is illustrated using data from the LISS panel.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-31"},"PeriodicalIF":5.3,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141735647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple Imputation with Factor Scores: A Practical Approach for Handling Simultaneous Missingness Across Items in Longitudinal Designs.","authors":"Yanling Li, Zita Oravecz, Linying Ji, Sy-Miin Chow","doi":"10.1080/00273171.2024.2371816","DOIUrl":"10.1080/00273171.2024.2371816","url":null,"abstract":"<p><p>Missingness in intensive longitudinal data triggered by latent factors constitute one type of nonignorable missingness that can generate simultaneous missingness across multiple items on each measurement occasion. To address this issue, we propose a multiple imputation (MI) strategy called MI-FS, which incorporates factor scores, lag/lead variables, and missing data indicators into the imputation model. In the context of process factor analysis (PFA), we conducted a Monte Carlo simulation study to compare the performance of MI-FS to listwise deletion (LD), MI with manifest variables (MI-MV, which implements MI on both dependent variables and covariates), and partial MI with MVs (PMI-MV, which implements MI on covariates and handles missing dependent variables <i>via</i> full-information maximum likelihood) under different conditions. Across conditions, we found MI-based methods overall outperformed the LD; the MI-FS approach yielded lower root mean square errors (RMSEs) and higher coverage rates for auto-regression (AR) parameters compared to MI-MV; and the PMI-MV and MI-MV approaches yielded higher coverage rates for most parameters except AR parameters compared to MI-FS. These approaches were also compared using an empirical example investigating the relationships between negative affect and perceived stress over time. Recommendations on when and how to incorporate factor scores into MI processes were discussed.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-29"},"PeriodicalIF":5.3,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724938/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141602109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}