{"title":"Relative importance analysis in multiple mediator models.","authors":"Xun Zhu, Xin Gu","doi":"10.1037/met0000725","DOIUrl":"https://doi.org/10.1037/met0000725","url":null,"abstract":"<p><p>Mediation analysis is widely used in psychological research to identify the relationship between independent and dependent variables through mediators. Assessing the relative importance of mediators in parallel mediator models can help researchers better understand mediation effects and guide interventions. The traditional coefficient-based measures of indirect effect merely focus on the partial effect of each mediator, which may reach undesirable results of importance assessment. This study develops a new method of measuring the importance of multiple mediators. Three <i>R</i>² measures of indirect effect proposed by MacKinnon (2008) are extended to parallel mediator models. Dominance analysis, a popular method of evaluating relative importance, is applied to decompose the <i>R</i>² indirect effect and attribute it to each mediator. This offers new measures of indirect effect in terms of relative importance. Both frequentist and Bayesian methods are used to make statistical inference for the dominance measures. Simulation studies investigate the performance of the dominance measures and their inference. A real data example illustrates how the relative importance can be assessed in multiple mediator models. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143415025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Missing not at random intensive longitudinal data with dynamic structural equation models.","authors":"Daniel McNeish","doi":"10.1037/met0000742","DOIUrl":"https://doi.org/10.1037/met0000742","url":null,"abstract":"<p><p>Intensive longitudinal designs are increasingly popular for assessing moment-to-moment changes in mood, affect, and interpersonal or health behavior. Compliance in these studies is never perfect given the high frequency of data collection, so missing data are unavoidable. Nonetheless, there is relatively little existing research on missing data within dynamic structural equation models, a recently proposed framework for modeling intensive longitudinal data. The few studies that exist tend to focus on methods appropriate for data that are missing at random (MAR). However, missing not at random (MNAR) data are prevalent, particularly when the interest is a sensitive outcome related to mental health, substance use, or sexual behavior. As a motivating example, a study on people with binge eating disorder that has large amounts of missingness in a self-report item related to overeating is considered. Missingness may be high because participants felt shame reporting this behavior, which is a clear case of MNAR and for which methods like multiple imputation and full-information maximum likelihood are less effective. To improve handling of MNAR intensive longitudinal data, embedding a Diggle-Kenward-type MNAR model within a dynamic structural equation model is proposed. This approach is straightforward to apply in popular software like Mplus and only requires a few extra lines of code relative to models that assume MAR. Results from the proposed approach are contrasted with results from models that assume MAR, and a simulation study is conducted to study performance of the proposed model with continuous or binary outcomes. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143391578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A peculiarity in psychological measurement practices.","authors":"Mark White","doi":"10.1037/met0000731","DOIUrl":"https://doi.org/10.1037/met0000731","url":null,"abstract":"<p><p>This essay discusses a peculiarity in institutionalized psychological measurement practices. Namely, an inherent contradiction between guidelines for how scales/tests are developed and how those scales/tests are typically analyzed. Best practices for developing scales/tests emphasize developing individual items or subsets of items to capture unique aspects of constructs, such that the full construct is captured across the test. Analysis approaches, typically factor analysis or related reflective models, assume that no individual item (nor a subset of items) captures unique, construct-relevant variance. This contradiction has important implications for the use of factor analysis to support measurement claims. The implications and other critiques of factor analysis are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143391577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability in unidimensional ordinal data: A comparison of continuous and ordinal estimators.","authors":"Eunseong Cho, Sébastien Béland","doi":"10.1037/met0000739","DOIUrl":"https://doi.org/10.1037/met0000739","url":null,"abstract":"<p><p>This study challenges three common methodological beliefs and practices. The first question examines whether ordinal reliability estimators are more accurate than continuous estimators for unidimensional data with uncorrelated errors. Continuous estimators (e.g., coefficient alpha) can be applied to both continuous and ordinal data, while ordinal estimators (e.g., ordinal alpha and categorical omega) are specific to ordinal data. Although ordinal estimators are often argued to have conceptual advantages, comprehensive investigations into their accuracy are limited. The second question explores the relationship between skewness and kurtosis in ordinal data. Previous simulation studies have primarily examined cases where skewness and kurtosis change in the same direction, leaving gaps in understanding their independent effects. The third question addresses item response theory (IRT) models: Should the scaling constant always be fixed at the same value (e.g., 1.7)? To answer these questions, this study conducted a Monte Carlo simulation comparing four continuous estimators and eight ordinal estimators. The results indicated that most estimators achieved acceptable levels of accuracy. On average, ordinal estimators were slightly less accurate than continuous estimators, though the difference was smaller than what most users would consider practically significant (e.g., less than 0.01). However, ordinal alpha stood out as a notable exception, severely overestimating reliability across various conditions. Regarding the scaling constant in IRT models, the results indicated that its optimal value varied depending on the data type (e.g., dichotomous vs. polytomous). In some cases, values below 1.7 were optimal, while in others, values above 1.8 were optimal. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143391582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relationship between the phi coefficient and the unidimensionality index H: Improving psychological scaling from the ground up.","authors":"Johannes Titz","doi":"10.1037/met0000736","DOIUrl":"https://doi.org/10.1037/met0000736","url":null,"abstract":"<p><p>To study the dimensional structure of psychological phenomena, a precise definition of unidimensionality is essential. Most definitions of unidimensionality rely on factor analysis. However, the reliability of factor analysis depends on the input data, which primarily consists of Pearson correlations. A significant issue with Pearson correlations is that they are almost guaranteed to underestimate unidimensionality, rendering them unsuitable for evaluating the unidimensionality of a scale. This article formally demonstrates that the simple unidimensionality index <i>H</i> is always at least as high as, or higher than, the Pearson correlation for dichotomous and polytomous items (φ). Leveraging this inequality, a case is presented where five dichotomous items are perfectly unidimensional, yet factor analysis based on φ incorrectly suggests a two-dimensional solution. To illustrate that this issue extends beyond theoretical scenarios, an analysis of real data from a statistics exam (<i>N</i> = 133) is conducted, revealing the same problem. An in-depth analysis of the exam data shows that violations of unidimensionality are systematic and should not be dismissed as mere noise. Inconsistent answering patterns can indicate whether a participant blundered, cheated, or has conceptual misunderstandings, information typically overlooked by traditional scaling procedures based on correlations. The conclusion is that psychologists should consider unidimensionality not as a peripheral concern but as the foundation for any serious scaling attempt. The index <i>H</i> could play a crucial role in establishing this foundation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143391502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reassessing the fitting propensity of factor models.
Wes Bonifay, Li Cai, Carl F Falk, Kristopher J Preacher
Psychological Methods, published online 2025-02-10. doi: 10.1037/met0000735

Model complexity is a critical consideration when evaluating a statistical model. To quantify complexity, one can examine fitting propensity (FP), or the ability of the model to fit well to diverse patterns of data. The scant foundational research on FP has focused primarily on proof of concept rather than practical application. To address this oversight, the present work joins a recently published study in examining the FP of models that are commonly applied in factor analysis. We begin with a historical account of statistical model evaluation, which refutes the notion that complexity can be fully understood by counting the number of free parameters in the model. We then present three sets of analytic examples to better understand the FP of exploratory and confirmatory factor analysis models that are widely used in applied research. We characterize our findings relative to previously disseminated claims about factor model FP. Finally, we provide some recommendations for future research on FP in latent variable modeling.
A novel approach to estimate moderated treatment effects and moderated mediated effects with continuous moderators.
Matthew J Valente, Judith J M Rijnhart, Oscar Gonzalez
Psychological Methods, 2025-02-01, pp. 1-15 (Epub 2023-06-12). doi: 10.1037/met0000593
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10713862/pdf/

Moderation analysis is used to study under what conditions or for which subgroups of individuals a treatment effect is stronger or weaker. When a moderator variable is categorical, such as assigned sex, treatment effects can be estimated for each group, resulting in a treatment effect for males and a treatment effect for females. If a moderator variable is continuous, one strategy for investigating moderated treatment effects is to estimate conditional effects (i.e., simple slopes) via the pick-a-point approach. When conditional effects are estimated using the pick-a-point approach, they are often given the interpretation of "the treatment effect for the subgroup of individuals…." However, interpreting these conditional effects as subgroup effects is potentially misleading because conditional effects are interpreted at a specific value of the moderator variable (e.g., +1 SD above the mean). We describe a simple solution that resolves this problem using a simulation-based approach. We describe how to apply this simulation-based approach to estimate subgroup effects by defining subgroups using a range of scores on the continuous moderator variable. We apply this method to three empirical examples to demonstrate how to estimate subgroup effects for moderated treatment and moderated mediated effects when the moderator variable is continuous. Finally, we provide researchers with both SAS and R code to implement this method for similar situations described in this paper.
Troubleshooting Bayesian cognitive models.
Beth Baribault, Anne G E Collins
Psychological Methods, 2025-02-01, pp. 128-154 (Epub 2023-03-27). doi: 10.1037/met0000554
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522800/pdf/

Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software that efficiently automates the Markov chain Monte Carlo sampling used for Bayesian model fitting, including the popular Stan and PyMC packages, which automate the dynamic Hamiltonian Monte Carlo and No-U-Turn Sampler (HMC/NUTS) algorithms that we spotlight here. Unfortunately, Bayesian cognitive models can struggle to pass the growing number of diagnostic checks required of Bayesian models. If any failures are left undetected, inferences about cognition based on the model's output may be biased or incorrect. As such, Bayesian cognitive models almost always require troubleshooting before being used for inference. Here, we present a deep treatment of the diagnostic checks and procedures that are critical for effective troubleshooting but are often left underspecified by tutorial papers. After a conceptual introduction to Bayesian cognitive modeling and HMC/NUTS sampling, we outline the diagnostic metrics, procedures, and plots necessary to detect problems in model output, with an emphasis on how these requirements have recently been changed and extended. Throughout, we explain how uncovering the exact nature of the problem is often the key to identifying solutions. We also demonstrate the troubleshooting process for an example hierarchical Bayesian model of reinforcement learning, including supplementary code. With this comprehensive guide to techniques for detecting, identifying, and overcoming problems in fitting Bayesian cognitive models, psychologists across subfields can more confidently build and use Bayesian cognitive models in their research.
Is exploratory factor analysis always to be preferred? A systematic comparison of factor analytic techniques throughout the confirmatory-exploratory continuum.
Pablo Nájera, Francisco J Abad, Miguel A Sorrel
Psychological Methods, 2025-02-01, pp. 16-39 (Epub 2023-05-25). doi: 10.1037/met0000579

The number of available factor analytic techniques has been increasing in recent decades. However, the lack of clear guidelines and of exhaustive comparison studies between the techniques may prevent these valuable methodological advances from making their way into applied research. The present paper evaluates the performance of confirmatory factor analysis (CFA), CFA with sequential model modification using modification indices and the Saris procedure, exploratory factor analysis (EFA) with different rotation procedures (Geomin, target, and objectively refined target matrix), Bayesian structural equation modeling (BSEM), and a new set of procedures that, after fitting an unrestrictive model (i.e., EFA, BSEM), identify and retain only the relevant loadings to provide a parsimonious CFA solution (ECFA, BCFA). By means of an exhaustive Monte Carlo simulation study and a real data illustration, it is shown that CFA and BSEM are overly rigid and, consequently, do not appropriately recover the structure of slightly misspecified models. EFA usually provides the most accurate parameter estimates, although the choice of rotation procedure is of major importance, especially depending on whether the latent factors are correlated or not. Finally, ECFA may be a sound option whenever an a priori structure cannot be hypothesized and the latent factors are correlated. Moreover, it is shown that the pattern of results of a factor analytic technique can be partly predicted from its position on the confirmatory-exploratory continuum. Applied recommendations are given for selecting the most appropriate technique under different representative scenarios by means of a detailed flowchart.
Inference with cross-lagged effects: Problems in time.
Charles C Driver
Psychological Methods, 2025-02-01, pp. 174-202 (Epub 2024-07-18). doi: 10.1037/met0000665

The interpretation of cross-effects from vector autoregressive models to infer structure and causality among constructs is widespread and sometimes problematic. I describe problems in the interpretation of cross-effects when processes that are thought to fluctuate continuously in time are, as is typically done, modeled as changing only in discrete steps (as in, e.g., structural equation modeling): zeroes in a discrete-time temporal matrix do not necessarily correspond to zero effects in the underlying continuous processes, and vice versa. This has implications for the common case in which the presence or absence of cross-effects is used for inference about underlying causal processes. I demonstrate these problems via simulation, and also show that when an underlying set of processes is continuous in time, even relatively few direct causal links can result in much denser temporal effect matrices in discrete time. I demonstrate one solution to these issues: parameterizing the system as a stochastic differential equation and focusing inference on the continuous-time temporal effects. I follow this with some discussion of issues regarding the switch to continuous time, specifically regularization, appropriate measurement time lag, and model order. An empirical example using intensive longitudinal data highlights some of the complexities of applying such approaches to real data, particularly with respect to model specification, examining misspecification, and parameter interpretation.