{"title":"Interrater Reliability for Interdependent Social Network Data: A Generalizability Theory Approach.","authors":"Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark","doi":"10.1080/00273171.2024.2444940","DOIUrl":"https://doi.org/10.1080/00273171.2024.2444940","url":null,"abstract":"<p><p>We propose interrater reliability coefficients for observational interdependent social network data, which are dyadic data from a network of interacting subjects that are observed by external raters. Using the social relations model, dyadic scores of subjects' behaviors during these interactions can be decomposed into actor, partner, and relationship effects. These effects constitute different facets of theoretical interest about which researchers formulate research questions. Based on generalizability theory, we extend the social relations model with rater effects, resulting in a model that decomposes the variance of dyadic observational data into effects of actors, partners, relationships, raters, and their statistical interactions. We use the variances of these effects to define intraclass correlation coefficients (ICCs) that indicate the extent to which the actor, partner, and relationship effects can be generalized across external raters. We propose Markov chain Monte Carlo estimation of a Bayesian hierarchical linear model to estimate the ICCs, and test their bias and coverage in a simulation study. The method is illustrated using data on social mimicry.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-16"},"PeriodicalIF":5.3,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
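The ICCs described in the abstract above compare an effect's variance against rater-related variance. As a minimal numeric sketch of the generic generalizability-theory form such coefficients take (the variance components, their values, and the single-facet formula are illustrative assumptions, not the article's estimator):

```python
# Hypothetical variance components for the actor facet of a social relations
# model extended with rater effects (illustrative values, not article results).
var_actor = 0.50        # variance of stable actor effects
var_actor_rater = 0.20  # actor-by-rater interaction variance (rater disagreement)
n_raters = 3

# Generic generalizability-theory ICCs: the share of observed actor-effect
# variance that generalizes across raters, for a single rater and for an
# average over n_raters raters.
icc_single = var_actor / (var_actor + var_actor_rater)
icc_mean = var_actor / (var_actor + var_actor_rater / n_raters)
```

Averaging over more raters divides the rater-interaction term by the number of raters, so generalizability rises with the size of the rater pool.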
{"title":"Nodewise Parameter Aggregation for Psychometric Networks.","authors":"K B S Huth, B DeLong, L Waldorp, M Marsman, M Rhemtulla","doi":"10.1080/00273171.2025.2450648","DOIUrl":"https://doi.org/10.1080/00273171.2025.2450648","url":null,"abstract":"<p><p>Psychometric networks can be estimated using nodewise regression to estimate edge weights when the joint distribution is analytically difficult to derive or the estimation is too computationally intensive. The nodewise approach runs generalized linear models with each node as the outcome. Two regression coefficients are obtained for each link, which need to be aggregated to obtain the edge weight (i.e., the conditional association). The nodewise approach has been shown to reveal the true graph structure. However, for continuous variables, the regression coefficients are scaled differently than the partial correlations, and therefore the nodewise approach may lead to different edge weights. Here, the aggregation of the two regression coefficients is crucial in obtaining the true partial correlation. We show that when the correlations of the two predictors with the control variables differ, averaging the regression coefficients leads to an asymptotically biased estimator of the partial correlation. This is likely to occur when a variable has a high correlation with other nodes in the network (e.g., variables in the same domain) and a lower correlation with another node (e.g., variables in a different domain). We discuss two ways of aggregating the regression weights that recover the true partial correlation: first, multiplying the weights and taking their square root, and second, rescaling the regression weight by the residual variances. The latter two estimators can recover the true network structure and edge weights.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-9"},"PeriodicalIF":5.3,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
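The aggregation problem from the abstract above can be reproduced from the precision matrix of a small hypothetical three-node network (the correlation values are chosen for illustration, not taken from the article): when the two predictors correlate differently with the control variable, averaging the two nodewise coefficients is biased, whereas the signed square root of their product recovers the partial correlation exactly.

```python
import numpy as np

# Hypothetical 3-node correlation matrix: the control variable X3 correlates
# 0.7 with X1 but only 0.1 with X2, so the two nodewise regression
# coefficients for the X1-X2 edge are on different scales.
R = np.array([[1.0, 0.3, 0.7],
              [0.3, 1.0, 0.1],
              [0.7, 0.1, 1.0]])

K = np.linalg.inv(R)                               # precision matrix
pcor_true = -K[0, 1] / np.sqrt(K[0, 0] * K[1, 1])  # partial corr. of X1, X2 given X3

# Population nodewise regression coefficients: beta_ij = -K_ij / K_ii
b12 = -K[0, 1] / K[0, 0]    # coefficient of X2 when X1 is the outcome
b21 = -K[0, 1] / K[1, 1]    # coefficient of X1 when X2 is the outcome

naive_avg = (b12 + b21) / 2                      # biased aggregate
sqrt_prod = np.sign(b12) * np.sqrt(b12 * b21)    # recovers the partial correlation
```

Because b12 * b21 = K[0,1]**2 / (K[0,0] * K[1,1]), the square-root-of-product rule is algebraically exact at the population level regardless of how the two coefficients are scaled, while the plain average is not.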
{"title":"Estimated Factor Scores Are Not True Factor Scores.","authors":"Mijke Rhemtulla, Victoria Savalei","doi":"10.1080/00273171.2024.2444943","DOIUrl":"https://doi.org/10.1080/00273171.2024.2444943","url":null,"abstract":"<p><p>In this tutorial, we clarify the distinction between estimated factor scores, which are weighted composites of observed variables, and true factor scores, which are unobservable values of the underlying latent variable. Using an analogy with linear regression, we show how predicted values in linear regression share the properties of the most common type of factor score estimates, regression factor scores, computed from single-indicator and multiple-indicator latent variable models. Using simulated data from 1- and 2-factor models, we also show how the amount of measurement error affects the reliability of regression factor scores, and compare the performance of regression factor scores with that of unweighted sum scores.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-22"},"PeriodicalIF":5.3,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
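The gap between estimated and true factor scores described above can be seen in a small simulation; the loadings, sample size, and use of Thurstone's regression weights (the model-implied Sigma inverse times lambda, with unit factor variance) are illustrative assumptions, not the tutorial's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-factor model with 4 standardized indicators
lam = np.array([0.8, 0.7, 0.6, 0.5])   # factor loadings
psi = 1.0 - lam**2                     # unique variances

n = 20000
f = rng.standard_normal(n)             # true factor scores (never observed)
x = np.outer(f, lam) + rng.standard_normal((n, 4)) * np.sqrt(psi)

# Regression (Thurstone) factor-score weights: Sigma^{-1} lambda
Sigma = np.outer(lam, lam) + np.diag(psi)   # model-implied covariance matrix
w = np.linalg.solve(Sigma, lam)
f_hat = x @ w                               # estimated factor scores
sum_score = x.sum(axis=1)                   # unweighted sum score

r_factor = np.corrcoef(f, f_hat)[0, 1]      # < 1: estimates are not true scores
r_sum = np.corrcoef(f, sum_score)[0, 1]
```

With these loadings the regression scores correlate about 0.89 with the true factor: clearly better than chance, clearly short of 1, and at least as high as the sum score, since the regression weights are optimal linear predictors of the factor.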
{"title":"Evidence That Growth Mixture Model Results Are Highly Sensitive to Scoring Decisions.","authors":"James Soland, Veronica Cole, Stephen Tavares, Qilin Zhang","doi":"10.1080/00273171.2024.2444955","DOIUrl":"https://doi.org/10.1080/00273171.2024.2444955","url":null,"abstract":"<p><p>Interest in identifying latent growth profiles to support the psychological and social-emotional development of individuals has translated into the widespread use of growth mixture models (GMMs). In most cases, GMMs are based on scores from item responses collected using survey scales or other measures. Research already shows that GMMs can be sensitive to departures from ideal modeling conditions and that growth model results outside of GMMs are sensitive to decisions about how item responses are scored, but the impact of scoring decisions on GMMs has never been investigated. We start to close that gap in the literature with the current study. Through empirical and Monte Carlo studies, we show that GMM results, including convergence, class enumeration, and latent growth trajectories within class, are extremely sensitive to seemingly arcane measurement decisions. Further, our results make clear that, because GMM latent classes are not known a priori, measurement models used to produce scores for use in GMMs are, almost by definition, misspecified because they cannot account for group membership. Misspecification of the measurement model then, in turn, biases GMM results. Practical implications of these results are discussed. Our findings raise serious concerns that many results in the current GMM literature may be driven, in part or whole, by measurement artifacts rather than substantive differences in developmental trends.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-22"},"PeriodicalIF":5.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142985301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-Stationarity in Time-Series Analysis: Modeling Stochastic and Deterministic Trends.","authors":"Oisín Ryan, Jonas M B Haslbeck, Lourens J Waldorp","doi":"10.1080/00273171.2024.2436413","DOIUrl":"https://doi.org/10.1080/00273171.2024.2436413","url":null,"abstract":"<p><p>Time series analysis is increasingly popular across scientific domains. A key concept in time series analysis is stationarity, the stability of statistical properties of a time series. Understanding stationarity is crucial to addressing frequent issues in time series analysis, such as the consequences of failing to model non-stationarity, how to determine the mechanisms generating non-stationarity, and consequently how to model those mechanisms (i.e., by differencing or detrending). However, many empirical researchers have a limited understanding of stationarity, which can lead to incorrect research practices and misleading substantive conclusions. In this paper, we address this problem by answering these questions in an accessible way. To this end, we study how researchers can use detrending and differencing to model trends in time series analysis. We show <i>via</i> simulation the consequences of modeling trends inappropriately, and evaluate the performance of one popular approach to distinguish different trend types in empirical data. We present these results in an accessible way, providing an extensive introduction to key concepts in time series analysis, illustrated throughout with simple examples. Finally, we discuss a number of take-home messages and extensions to standard approaches, which directly address more complex time series analysis problems encountered by empirical researchers.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-33"},"PeriodicalIF":5.3,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
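The two trend mechanisms mentioned above and their matching remedies can be sketched in a brief simulation (the series and parameter values are illustrative, not the paper's design): detrending targets a deterministic, trend-stationary series, while differencing targets a stochastic, difference-stationary trend such as a random walk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)

# Deterministic trend: stationary noise around a linear trend (trend-stationary)
det_series = 0.05 * t + rng.standard_normal(n)

# Stochastic trend: a random walk (difference-stationary)
sto_series = np.cumsum(rng.standard_normal(n))

# Detrending: regress the series on time and keep the residuals
slope_intercept = np.polyfit(t, det_series, 1)
detrended = det_series - np.polyval(slope_intercept, t)

# Differencing: first differences remove the unit root
differenced = np.diff(sto_series)
```

Applying the wrong remedy is the failure mode the paper simulates: differencing a trend-stationary series induces artificial negative lag-1 autocorrelation (the noise becomes an MA(1) process), while detrending a random walk leaves the stochastic non-stationarity in place.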
{"title":"Causal Estimands and Multiply Robust Estimation of Mediated-Moderation.","authors":"Xiao Liu, Mark Eddy, Charles R Martinez","doi":"10.1080/00273171.2024.2444949","DOIUrl":"https://doi.org/10.1080/00273171.2024.2444949","url":null,"abstract":"<p><p>When studying effect heterogeneity between different subgroups (i.e., moderation), researchers are frequently interested in the mediation mechanisms underlying the heterogeneity, that is, the mediated moderation. For assessing mediated moderation, conventional methods typically require parametric models to define mediated moderation, which is limiting when parametric models are misspecified and when causal interpretation is of interest. For causal interpretations about mediation, causal mediation analysis is increasingly popular but remains underdeveloped for mediated moderation analysis. In this study, we extend the causal mediation literature and propose a novel method for mediated moderation analysis. Using the potential outcomes framework, we obtain two causal estimands that decompose the total moderation: (i) the mediated moderation attributable to a mediator and (ii) the remaining moderation unattributable to the mediator. We also develop a multiply robust estimation method for the mediated moderation analysis, which can incorporate machine learning methods in the inference of the causal estimands. We evaluate the proposed method through simulations. We illustrate the proposed mediated moderation analysis by assessing the mediation mechanism that underlies the gender difference in the effect of a preventive intervention on adolescent behavioral outcomes.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-27"},"PeriodicalIF":5.3,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142973231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MIIVefa: An R Package for a New Type of Exploratory Factor Analysis Using Model-Implied Instrumental Variables.","authors":"Lan Luo, Kathleen M Gates, Kenneth A Bollen","doi":"10.1080/00273171.2024.2436418","DOIUrl":"https://doi.org/10.1080/00273171.2024.2436418","url":null,"abstract":"<p><p>We present the R package MIIVefa, designed to implement the MIIV-EFA algorithm. This algorithm explores and identifies the underlying factor structure within a set of variables. The resulting model is not a typical exploratory factor analysis (EFA) model because some loadings are fixed to zero and it allows users to include hypothesized correlated errors, such as might occur with longitudinal data. As such, it resembles a confirmatory factor analysis (CFA) model. But unlike CFA, the MIIV-EFA algorithm determines the number of factors and the items that load on these factors directly from the data. We provide both simulation and empirical examples to illustrate the application of MIIVefa and discuss its benefits and limitations.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-9"},"PeriodicalIF":5.3,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142900361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Latent Structure of Responses and Response Times from Multidimensional Personality Measurement with Ordinal Rating Scales.","authors":"Inhan Kang","doi":"10.1080/00273171.2024.2436406","DOIUrl":"https://doi.org/10.1080/00273171.2024.2436406","url":null,"abstract":"<p><p>In this article, we propose latent variable models that jointly account for responses and response times (RTs) in multidimensional personality measurements. We address two key research questions regarding the latent structure of RT distributions through model comparisons. First, we decompose RT into decision and non-decision times by incorporating irreducible minimum shifts in RT distributions, as done in cognitive decision-making models. Second, we investigate whether the speed factor underlying decision times should be multidimensional with the same latent structure as personality traits, or if a unidimensional speed factor suffices. Comprehensive model comparisons across four distinct datasets suggest that a joint model with person-specific parameters to account for shifts in RT distributions and a unidimensional speed factor provides the best account of ordinal responses and RTs. Posterior predictive checks further confirm these findings. Additionally, simulation studies validate the parameter recovery of the proposed models and support the empirical results. Most importantly, failing to account for the irreducible minimum shift in RT distributions leads to systematic biases in other model components and severe underestimation of the nonlinear relationship between responses and RTs.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-30"},"PeriodicalIF":5.3,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Contextual Models for Intensive Longitudinal Data in the Presence of Noise.","authors":"Anja F Ernst, Eva Ceulemans, Laura F Bringmann, Janne Adolf","doi":"10.1080/00273171.2024.2436420","DOIUrl":"https://doi.org/10.1080/00273171.2024.2436420","url":null,"abstract":"<p><p>Nowadays, research into affect frequently employs intensive longitudinal data to assess fluctuations in daily emotional experiences. The resulting data are often analyzed with moderated autoregressive models to capture the influences of contextual events on the emotion dynamics. The presence of noise (e.g., measurement error) in the measures of the contextual events, however, is commonly ignored in these models. Disregarding noise in these covariates when it is present may result in biased parameter estimates and wrong conclusions about the underlying emotion dynamics. In a simulation study, we evaluate the estimation accuracy, assessed in terms of bias and variance, of different moderated autoregressive models in the presence of noise in the covariate. We show that estimation accuracy decreases as the amount of noise in the covariate increases. We also show that this bias is magnified by a larger effect of the covariate, a slower switching frequency of the covariate, a discrete rather than a continuous covariate, and constant rather than occasional noise in the covariate. Finally, we show that the bias resulting from a noisy covariate does not decrease when the number of observations increases. We end with a few recommendations for applying moderated autoregressive models based on our simulation.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-21"},"PeriodicalIF":5.3,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142830449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
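The attenuation mechanism behind the biases described above can be sketched with a small moderated AR(1) simulation (the model, parameter values, and noise level are illustrative assumptions, not the authors' simulation design): constant noise in a binary covariate shrinks the estimated moderation effect, and more observations do not remove the bias.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical moderated AR(1): the autoregressive effect is a + b * c_t,
# where c_t is a binary contextual covariate.
a, b = 0.2, 0.4
c = rng.integers(0, 2, n).astype(float)
x = np.zeros(n)
for i in range(1, n):
    x[i] = (a + b * c[i]) * x[i - 1] + rng.standard_normal()

def fit_moderated_ar(x, cov):
    """OLS for x_t = (a + b * cov_t) * x_{t-1} + e_t."""
    X = np.column_stack([x[:-1], cov[1:] * x[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return coef  # [a_hat, b_hat]

a_clean, b_clean = fit_moderated_ar(x, c)      # near-true estimates

# Constant measurement noise in the covariate attenuates the moderation effect
c_noisy = c + 0.5 * rng.standard_normal(n)
a_noisy, b_noisy = fit_moderated_ar(x, c_noisy)
```

This is classical errors-in-variables attenuation applied to the interaction term, which is why it persists no matter how long the time series grows.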
{"title":"A Gentle Introduction and Application of Feature-Based Clustering with Psychological Time Series.","authors":"Jannis Kreienkamp, Maximilian Agostini, Rei Monden, Kai Epstude, Peter de Jonge, Laura F Bringmann","doi":"10.1080/00273171.2024.2432918","DOIUrl":"https://doi.org/10.1080/00273171.2024.2432918","url":null,"abstract":"<p><p>Psychological researchers and practitioners collect increasingly complex time series data aimed at identifying differences between the developments of participants or patients. Past research has proposed a number of dynamic measures that describe meaningful developmental patterns for psychological data (e.g., instability, inertia, linear trend). Yet, commonly used clustering approaches are often not able to include these meaningful measures (e.g., due to model assumptions). We propose feature-based time series clustering as a flexible, transparent, and well-grounded approach that clusters participants directly on these dynamic measures using common clustering algorithms. We introduce the approach and illustrate its utility with real-world empirical data that highlight common experience sampling method (ESM) challenges of multivariate conceptualizations, structural missingness, and non-stationary trends. We use the data to showcase the main steps of input selection, feature extraction, feature reduction, feature clustering, and cluster evaluation. We also provide practical algorithm overviews and readily available code for data preparation, analysis, and interpretation.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-31"},"PeriodicalIF":5.3,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
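The main steps above (extract dynamic measures per participant, then cluster the feature matrix) can be sketched end to end; the AR(1) groups, the particular feature set (mean, SD, lag-1 autocorrelation, linear trend), and the minimal k-means below are illustrative assumptions, not the authors' code or recommended settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(phi, n=300):
    """Simulate an AR(1) series; phi plays the role of emotional inertia."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

# Two hypothetical participant groups: low-inertia vs. high-inertia dynamics
series = [ar1(0.1) for _ in range(10)] + [ar1(0.8) for _ in range(10)]

def extract_features(x):
    """Dynamic measures per participant: mean, SD, lag-1 autocorrelation, trend."""
    t = np.arange(len(x))
    slope = np.polyfit(t, x, 1)[0]
    ac1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return np.array([x.mean(), x.std(), ac1, slope])

F = np.array([extract_features(x) for x in series])
F = (F - F.mean(axis=0)) / F.std(axis=0)   # standardize features

# Minimal k-means (k = 2), initialized from two far-apart participants
centers = F[[0, -1]].copy()
for _ in range(20):
    d = ((F[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([F[labels == k].mean(axis=0) for k in (0, 1)])
```

In practice one would swap the toy k-means for an off-the-shelf algorithm and add the abstract's cluster-evaluation step (e.g., comparing solutions across k).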