Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-10-13 · DOI: 10.1037/met0000534 · Pages: 919-946
David Jendryczko, Fridtjof W Nussbeck
Estimating and investigating multiple constructs multiple indicators social relations models with and without roles within the traditional structural equation modeling framework: A tutorial.
Abstract: The present contribution provides a tutorial for the estimation of the social relations model (SRM) by means of structural equation modeling (SEM). Within the overarching SEM framework, the SRM without roles (with interchangeable dyads) is derived as a more restrictive form of the SRM with roles (with noninterchangeable dyads). Starting with the simplest type of the SRM for one latent construct assessed by one manifest round-robin indicator, we show how the model can be extended to multiple constructs, each measured by multiple indicators. We illustrate a multiple constructs multiple indicators SEM SRM both with and without roles using simulated data and explain the parameter interpretations. We present how testing the substantive model assumptions can be disentangled from testing the interchangeability of dyads. Additionally, we point out modeling strategies for cases in which only some members of a group can be differentiated with regard to their roles (i.e., only some group members are noninterchangeable). In the online supplemental materials, we provide concrete examples of specific modeling problems and their implementation in statistical software (Mplus, lavaan, and OpenMx). Advantages, caveats, possible extensions, and limitations in comparison with alternative modeling options are discussed.
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-09-01 · DOI: 10.1037/met0000516 · Pages: 967-979
Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark
Updated guidelines on selecting an intraclass correlation coefficient for interrater reliability, with applications to incomplete observational designs.
Abstract: Several intraclass correlation coefficients (ICCs) are available to assess the interrater reliability (IRR) of observational measurements. Selecting an ICC is complicated, and existing guidelines have three major limitations. First, they do not discuss incomplete designs, in which raters partially vary across subjects. Second, they provide no coherent perspective on the error variance in an ICC, clouding the choice between the available coefficients. Third, the distinction between fixed or random raters is often misunderstood. Based on generalizability theory (GT), we provide updated guidelines on selecting an ICC for IRR, which are applicable to both complete and incomplete observational designs. We challenge conventional wisdom about ICCs for IRR by claiming that raters should seldom (if ever) be considered fixed. Also, we clarify how to interpret ICCs in the case of unbalanced and incomplete designs. We explain four choices a researcher needs to make when selecting an ICC for IRR, and guide researchers through these choices by means of a flowchart, which we apply to three empirical examples from clinical and developmental domains. In the Discussion, we provide guidance in reporting, interpreting, and estimating ICCs, and propose future directions for research into the ICCs for IRR.
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-10-06 · DOI: 10.1037/met0000530 · Pages: 868-889
Kenneth A Bollen, Adam G Lilly, Lan Luo
Selecting scaling indicators in structural equation models (SEMs).
Abstract: It is common practice for psychologists to specify models with latent variables to represent concepts that are difficult to measure directly. Each latent variable needs a scale, and the most popular method of scaling, as well as the default in most structural equation modeling (SEM) software, uses a scaling or reference indicator. Much of the time, the choice of which indicator to use for this purpose receives little attention, and many analysts use the first indicator without considering whether there are better choices. When all indicators of the latent variable have essentially the same properties, the choice matters less. But when this is not true, we could benefit from scaling indicator guidelines. Our article first demonstrates why latent variables need a scale. We then propose a set of criteria and accompanying diagnostic tools that can assist researchers in making informed decisions about scaling indicators. The criteria for a good scaling indicator include high face validity, high correlation with the latent variable, factor complexity of one, no correlated errors, no direct effects with other indicators, a minimal number of significant overidentification equation tests and modification indices, and invariance across groups and time. We demonstrate these criteria and diagnostics using two empirical examples and provide guidance on navigating conflicting results among criteria.
Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275390/pdf/
{"title":"Multiple imputation of missing data in large studies with many variables: A fully conditional specification approach using partial least squares.","authors":"Simon Grund, Oliver Lüdtke, Alexander Robitzsch","doi":"10.1037/met0000694","DOIUrl":"https://doi.org/10.1037/met0000694","url":null,"abstract":"<p><p>Multiple imputation (MI) is one of the most popular methods for handling missing data in psychological research. However, many imputation approaches are poorly equipped to handle a large number of variables, which are a common sight in studies that employ questionnaires to assess psychological constructs. In such a case, conventional imputation approaches often become unstable and require that the imputation model be simplified, for example, by removing variables or combining them into composite scores. In this article, we propose an alternative method that extends the fully conditional specification approach to MI with dimension reduction techniques such as partial least squares. To evaluate this approach, we conducted a series of simulation studies, in which we compared it with other approaches that were based on variable selection, composite scores, or dimension reduction through principal components analysis. Our findings indicate that this novel approach can provide accurate results even in challenging scenarios, where other approaches fail to do so. Finally, we also illustrate the use of this method in real data and discuss the implications of our findings for practice. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142352684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian estimation and comparison of idiographic network models.","authors":"Björn S Siepe, Matthias Kloft, Daniel W Heck","doi":"10.1037/met0000672","DOIUrl":"https://doi.org/10.1037/met0000672","url":null,"abstract":"<p><p>Idiographic network models are estimated on time series data of a single individual and allow researchers to investigate person-specific associations between multiple variables over time. The most common approach for fitting graphical vector autoregressive (GVAR) models uses least absolute shrinkage and selection operator (LASSO) regularization to estimate a contemporaneous and a temporal network. However, estimation of idiographic networks can be unstable in relatively small data sets typical for psychological research. This bears the risk of misinterpreting differences in estimated networks as spurious heterogeneity between individuals. As a remedy, we evaluate the performance of a Bayesian alternative for fitting GVAR models that allows for regularization of parameters while accounting for estimation uncertainty. We also develop a novel test, implemented in the tsnet package in R, which assesses whether differences between estimated networks are reliable based on matrix norms. We first compare Bayesian and LASSO approaches across a range of conditions in a simulation study. Overall, LASSO estimation performs well, while a Bayesian GVAR without edge selection may perform better when the true network is dense. In an additional simulation study, the novel test is conservative and shows good false-positive rates. Finally, we apply Bayesian estimation and testing in an empirical example using daily data on clinical symptoms for 40 individuals. We additionally provide functionality to estimate Bayesian GVAR models in Stan within tsnet. Overall, Bayesian GVAR modeling facilitates the assessment of estimation uncertainty which is important for studying interindividual differences of intraindividual dynamics. In doing so, the novel test serves as a safeguard against premature conclusions of heterogeneity. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142352683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological Methods · Pub Date: 2024-09-26 · DOI: 10.1037/met0000680
Alberto Maydeu-Olivares, Carmen Ximénez, Javier Revuelta
Percentage of variance accounted for in structural equation models: The rediscovery of the goodness of fit index.
Abstract: This article delves into the often-overlooked metric of percentage of variance accounted for in structural equation models (SEM). The goodness of fit index (GFI) provides the percentage of variance of the sum of squared covariances explained by the model. Despite being introduced over four decades ago, the GFI has been overshadowed in favor of fit indices that prioritize distinctions between close and nonclose fitting models. Like R² in regression, the GFI should not be used for this purpose but rather to quantify the model's utility. The central aim of this study is to reintroduce the GFI, presenting a novel approach to computing it using mean and mean-and-variance corrected test statistics specifically designed for nonnormal data. We use an extensive simulation study to evaluate the precision of inferences on the GFI, including point estimates and confidence intervals. The findings demonstrate that the GFI can be estimated very accurately, even with nonnormal data, and that confidence intervals exhibit reasonable accuracy across diverse conditions, including large models and nonnormal data scenarios. The article provides methods and code for estimating the GFI in any SEM, urging researchers to reconsider reporting the percentage of variance accounted for as an essential tool for model assessment and selection.
{"title":"A computationally efficient and robust method to estimate exploratory factor analysis models with correlated residuals.","authors":"Guangjian Zhang, Dayoung Lee","doi":"10.1037/met0000609","DOIUrl":"https://doi.org/10.1037/met0000609","url":null,"abstract":"<p><p>A critical assumption in exploratory factor analysis (EFA) is that manifest variables are no longer correlated after the influences of the common factors are controlled. The assumption may not be valid in some EFA applications; for example, questionnaire items share other characteristics in addition to their relations to common factors. We present a computationally efficient and robust method to estimate EFA with correlated residuals. We provide details on the implementation of the method with both ordinary least squares estimation and maximum likelihood estimation. We demonstrate the method using empirical data and conduct a simulation study to explore its statistical properties. The results are (a) that the new method encountered much fewer convergence problems than the existing method; (b) that the EFA model with correlated residuals produced a more satisfactory model fit than the conventional EFA model; and (c) that the EFA model with correlated residuals and the conventional EFA model produced very similar estimates for factor loadings. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological Methods · Pub Date: 2024-09-19 · DOI: 10.1037/met0000700
Alina Herderich, Heribert H Freudenthaler, David Garcia
A computational method to reveal psychological constructs from text data.
Abstract: When starting to formalize psychological constructs, researchers traditionally rely on two distinct approaches: the quantitative approach, which defines constructs as part of a testable theory based on prior research and domain knowledge, often deploying self-report questionnaires, or the qualitative approach, which gathers data mostly in the form of text and bases construct definitions on exploratory analyses. Quantitative research might lead to an incomplete understanding of the construct, while qualitative research is limited by challenges in systematic data processing, especially at large scale. We present a new computational method that combines the comprehensiveness of qualitative research with the scalability of quantitative analyses to define psychological constructs from semistructured text data. Based on structured questions, participants are prompted to generate sentences reflecting instances of the construct of interest. We apply computational methods to calculate embeddings as numerical representations of the sentences, which we then run through a clustering algorithm to arrive at groupings of sentences as psychologically relevant classes. The method includes steps for the measurement and correction of bias introduced by the data generation and for the assessment of cluster validity according to human judgment. We demonstrate the applicability of our method on an example from emotion regulation. Based on short descriptions of emotion regulation attempts collected through an open-ended situational judgment test, we use our method to derive classes of emotion regulation strategies. Our approach shows how machine learning and psychology can be combined to provide new perspectives on the conceptualization of psychological processes.
{"title":"Cross-lagged panel modeling with binary and ordinal outcomes.","authors":"Bengt Muthén, Tihomir Asparouhov, Katie Witkiewitz","doi":"10.1037/met0000701","DOIUrl":"https://doi.org/10.1037/met0000701","url":null,"abstract":"<p><p>To date, cross-lagged panel modeling has been studied only for continuous outcomes. This article presents methods that are suitable also when there are binary and ordinal outcomes. Modeling, testing, identification, and estimation are discussed. A two-part ordinal model is proposed for ordinal variables with strong floor effects often seen in applications. An example considers the interaction between stress and alcohol use in an alcohol treatment study. Extensions to multiple-group analysis and modeling in the presence of trends are discussed. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thinking clearly about time-invariant confounders in cross-lagged panel models: A guide for choosing a statistical model from a causal inference perspective.","authors":"Kou Murayama, Thomas Gfrörer","doi":"10.1037/met0000647","DOIUrl":"https://doi.org/10.1037/met0000647","url":null,"abstract":"<p><p>Many statistical models have been proposed to examine reciprocal cross-lagged causal effects from panel data. The present article aims to clarify how these various statistical models control for unmeasured time-invariant confounders, helping researchers understand the differences in the statistical models from a causal inference perspective. Assuming that the true data generation model (i.e., causal model) has time-invariant confounders that were not measured, we compared different statistical models (e.g., dynamic panel model and random-intercept cross-lagged panel model) in terms of the conditions under which they can provide a relatively accurate estimate of the target causal estimand. Based on the comparisons and realistic plausibility of these conditions, we made some practical suggestions for researchers to select a statistical model when they are interested in causal inference. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}