Psychological Methods. Pub Date: 2025-10-02. DOI: 10.1037/met0000779. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12494154/pdf/
Inferences and effect sizes for direct, indirect, and total effects in continuous-time mediation models.
Ivan Jacob Agaloos Pesigan, Michael A Russell, Sy-Miin Chow

Abstract: Mediation modeling using longitudinal data is an exciting field that captures interrelations in dynamic changes, such as mediated changes, over time. Although discrete-time vector autoregressive approaches are commonly used to estimate indirect effects in longitudinal data, they have known limitations: inferential results depend on the time intervals between successive occasions, and measurements are assumed to be regularly spaced. Continuous-time vector autoregressive models have been proposed as an alternative that addresses these issues. Previous work in the area (e.g., Deboeck & Preacher, 2015; Ryan & Hamaker, 2021) has shown how the direct, indirect, and total effects can be calculated, for a range of time-interval values, from parameters estimated in continuous-time vector autoregressive models for causal inferential purposes. However, standardized effect size measures and methods for calculating the uncertainty around the direct, indirect, and total effects in continuous-time mediation have yet to be explored. Drawing from the mediation model literature, we present and compare results using the delta, Monte Carlo, and parametric bootstrap methods to calculate standard errors and confidence intervals for the direct, indirect, and total effects in continuous-time mediation. Options to automate these inferential procedures and facilitate interpretation are available in the cTMed R package.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-07-20. DOI: 10.1037/met0000599. Pages: 1043-1055.
Enhancing predictive power by unamalgamating multi-item scales.
David Trafimow, Michael R Hyman, Alena Kostyk

Abstract: The generally small, yet touted as "statistically significant," correlation coefficients in the social sciences jeopardize theory testing and prediction. To investigate the causes underlying these small coefficients, traditional equations are considered, including Spearman's (1904) classic attenuation formula, Cronbach's (1951) alpha, and Guilford and Fruchter's (1973) equation for the effect of additional items on a scale's predictive power. These equations differ in their implications for whether large interitem correlations enhance or diminish predictive power. Contrary to conventional practice, such correlations decrease predictive power when items are treated as components of a multi-item scale but can increase predictive power when items are treated separately. The implications are wide-ranging.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-09-07. DOI: 10.1037/met0000602. Pages: 949-965.
Bayesian evidence synthesis for informative hypotheses: An introduction.
Irene Klugkist, Thom Benjamin Volker

Abstract: To establish a theory, one needs cleverly designed and well-executed studies with appropriate and correctly interpreted statistical analyses. Equally important, one also needs replications of such studies and a way to combine the results of several replications into an accumulated state of knowledge. Bayesian informative hypothesis testing provides an appropriate and powerful analysis for studies targeting prespecified theories; an additional advantage of this Bayesian approach is that combining the results from multiple studies is straightforward. In this article, we discuss the behavior of Bayes factors in the context of evaluating informative hypotheses with multiple studies. Using simple models and (partly) analytical solutions, we introduce and evaluate Bayesian evidence synthesis (BES) and compare its results to Bayesian sequential updating, clarifying how different replication or updating questions can be evaluated. In addition, we illustrate BES with two simulations in which multiple studies are generated to resemble conceptual replications; the studies in these simulations are too heterogeneous to be aggregated with conventional research synthesis methods.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-12-25. DOI: 10.1037/met0000624. Pages: 966-979.
The dire disregard of measurement invariance testing in psychological science.
Esther Maassen, E Damiano D'Urso, Marcel A L M van Assen, Michèle B Nuijten, Kim De Roover, Jelte M Wicherts

Abstract: Self-report scales are widely used in psychology to compare means in latent constructs across groups, experimental conditions, or time points. However, for these comparisons to be meaningful and unbiased, the scales must demonstrate measurement invariance (MI) across the compared time points or (experimental) groups. MI testing determines whether the latent constructs are measured equivalently across groups or time, which is essential for meaningful comparisons. We conducted a systematic review of 426 psychology articles with openly available data to (a) examine common practices in conducting and reporting MI testing, (b) assess whether we could reproduce the reported MI results, and (c) conduct MI tests for the comparisons that enabled sufficiently powerful MI testing. We identified 96 articles that contained a total of 929 comparisons. Results showed that only 4% of the 929 comparisons underwent MI testing, and the tests were generally poorly reported. None of the reported MI tests were reproducible, and only 26% of the 174 newly performed MI tests reached sufficient (scalar) invariance, with MI failing completely in 58% of tests. Exploratory analyses suggested that in nearly half of the comparisons where configural invariance was rejected, the number of factors differed between groups. These results indicate that MI tests are rarely conducted and poorly reported in psychological studies. We observed frequent violations of MI, suggesting that reported differences between (experimental) groups may not be solely attributable to group differences in the latent constructs. We offer recommendations aimed at improving reporting and computational reproducibility practices in psychology.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-10-16. DOI: 10.1037/met0000615. Pages: 1095-1112.
Characterizing affect dynamics with a damped linear oscillator model: Theoretical considerations and recommendations for individual-level applications.
Mar J F Ollero, Eduardo Estrada, Michael D Hunter, Pablo F Cáncer

Abstract: People show stable differences in the way their affect fluctuates over time. Within the general framework of dynamical systems, the damped linear oscillator (DLO) model has been proposed as a useful approach to study affect dynamics. The DLO model can be applied to repeated measures provided by a single individual, and the resulting parameters can capture relevant features of the person's affect dynamics. Focusing on negative affect, we provide an accessible interpretation of the DLO model parameters in terms of emotional lability, resilience, and vulnerability. We conducted a Monte Carlo study to test the DLO model performance under different empirically relevant conditions in terms of individual characteristics and sampling scheme. We used state-space models in continuous time. The results show that, under certain conditions, the DLO model is able to accurately and efficiently recover the parameters underlying the affective dynamics of a single individual. We discuss the results and the theoretical and practical implications of using this model, illustrate how to use it for studying psychological phenomena at the individual level, and provide specific recommendations on how to collect data for this purpose. We also provide a tutorial website and computer code in R to implement this approach.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-08-10. DOI: 10.1037/met0000605. Pages: 1079-1094.
Examining individual differences in how interaction behaviors change over time: A dyadic multinomial logistic growth modeling approach.
Miriam Brinberg, Graham D Bodie, Denise H Solomon, Susanne M Jones, Nilam Ram

Abstract: Several theoretical perspectives suggest that dyadic experiences are distinguished by patterns of behavioral change that emerge during interactions. Methods for examining change in behavior over time are well elaborated for the study of change along continuous dimensions. Extensions for charting increases and decreases in individuals' use of specific, categorically defined behaviors, however, are rarely invoked. Greater accessibility of Bayesian frameworks that facilitate formulation and estimation of the requisite models is opening new possibilities. This article provides a primer on how multinomial logistic growth models can be used to examine between-dyad differences in within-dyad behavioral change over the course of an interaction. We describe and illustrate how these models are implemented in the Bayesian framework using data from support conversations between strangers (N = 118 dyads) to examine (RQ1) how six types of listeners' and disclosers' behaviors change as support conversations unfold and (RQ2) how the disclosers' preconversation distress moderates the change in conversation behaviors. The primer concludes with a series of notes on (a) implications of modeling choices, (b) flexibility in modeling nonlinear change, (c) the necessity for theory that specifies how and why change trajectories differ, and (d) how multinomial logistic growth models can help refine current theory about dyadic interaction.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-12-21. DOI: 10.1037/met0000630. Pages: 997-1016.
A systematic review of and reflection on the applications of factor mixture modeling.
Eunsook Kim, Yan Wang, Hsien-Yuan Hsu

Abstract: Factor mixture modeling (FMM) incorporates both continuous and categorical latent variables in a single analytic model, clustering items and observations simultaneously. Two decades after the introduction of FMM to psychological and behavioral science research, it is an opportune time to review FMM applications and understand how they are utilized in real-world research. We conducted a systematic review of 76 FMM applications. We developed a comprehensive coding scheme based on the current methodological literature on FMM and evaluated common usages and practices of FMM. Based on the review, we identify challenges and issues that applied researchers encounter in the practice of FMM and provide practical suggestions to promote well-informed decision making. Lastly, we discuss future methodological directions and suggest how FMM can be expanded beyond its typical use in applied studies.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-11-06. DOI: 10.1037/met0000612. Pages: 927-948. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11906213/pdf/
Handling missing data in partially clustered randomized controlled trials.
Manshu Yang, Darrell J Gaskin

Abstract: Partially clustered designs are widely used in psychological research, especially in randomized controlled trials that examine the effectiveness of prevention or intervention strategies. In a partially clustered trial, individuals are clustered into intervention groups in one or more study arms for the purpose of intervention delivery, whereas individuals in other arms (e.g., the waitlist control arm) are unclustered. Missing data are almost inevitable in partially clustered trials and can pose a major challenge in drawing valid research conclusions. This article focuses on handling auxiliary-variable-dependent missing at random data in partially clustered studies. Five methods were compared via a simulation study: simultaneous multiple imputation using joint modeling (MI-JM-SIM), arm-specific multiple imputation using joint modeling (MI-JM-AS), arm-specific multiple imputation using substantive-model-compatible sequential modeling (MI-SMC-AS), sequential fully Bayesian estimation using noninformative priors (SFB-NON), and sequential fully Bayesian estimation using weakly informative priors (SFB-WEAK). The results suggest that the MI-JM-AS method outperformed the other methods when the variables with missing values involved only fixed effects, whereas the MI-SMC-AS method was preferred if the incomplete variables featured random effects. Applications of the different methods are also illustrated using an empirical data example.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2023-03-23. DOI: 10.1037/met0000572. Pages: 1017-1042.
Questionable research practices and cumulative science: The consequences of selective reporting on effect size bias and heterogeneity.
Samantha F Anderson, Xinran Liu

Abstract: Despite increased attention to open science and transparency, questionable research practices (QRPs) remain common, and studies published using QRPs will remain part of the published record for some time. A particularly common type of QRP involves multiple testing; in some forms of this, researchers report only a selection of the tests conducted. Methodological investigations of multiple testing and QRPs have often focused on implications for a single study, as well as on how these practices can increase the likelihood of false positive results. However, it is illuminating to consider the role of these QRPs from a broader, literature-wide perspective, focusing on consequences that affect the interpretability of results across the literature. In this article, we use a Monte Carlo simulation study to explore the consequences of two QRPs involving multiple testing, cherry picking and question trolling, for effect size bias and heterogeneity among effect sizes. Importantly, we explicitly consider the role of real-world conditions, including sample size, effect size, and publication bias, that modulate the influence of these QRPs. Results demonstrated that QRPs can substantially affect both bias and heterogeneity, although there were many nuances, particularly relating to the influence of publication bias, among other factors. The present study adds a new perspective on how QRPs may influence researchers' ability to evaluate a literature accurately and cumulatively, and it points toward yet another reason to continue advocating for initiatives that reduce QRPs.
Psychological Methods. Pub Date: 2025-10-01. Epub Date: 2024-02-08. DOI: 10.1037/met0000646. Pages: 1113-1132.
Individual-level probabilities and cluster-level proportions: Toward interpretable level 2 estimates in unconflated multilevel models for binary outcomes.
Timothy Hayes

Abstract: Multilevel models allow researchers to test hypotheses at multiple levels of analysis, for example, assessing the effects of both individual-level and school-level predictors on a target outcome. To assess these effects with the greatest clarity, researchers are well-advised to cluster-mean center all Level 1 predictors and explicitly incorporate the cluster means into the model at Level 2. When an outcome of interest is continuous, this unconflated model specification serves both to increase model accuracy, by separating the level-specific effects of each predictor, and to increase model interpretability, by reframing the random intercepts as unadjusted cluster means. When an outcome of interest is binary or ordinal, however, only the first of these benefits is fully realized: in these models, the intuitive cluster-mean interpretations of Level 2 effects are only available on the metric of the linear predictor (e.g., the logit) or, equivalently, the latent response propensity, y*ij. Because the calculations for obtaining predicted probabilities, odds, and odds ratios (ORs) operate on the entire combined model equation, the interpretations of these quantities are inextricably tied to individual-level, rather than cluster-level, outcomes. This is unfortunate, given that the probability and odds metrics are often of greatest interest to researchers in practice. To address this issue, I propose a novel rescaling method designed to calculate cluster average success proportions, odds, and ORs in two-level binary and ordinal logistic and probit models. I apply the approach to a real data example and provide supplemental R functions to help users implement the method easily.