Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2023-06-12. DOI: 10.1037/met0000593
Matthew J Valente, Judith J M Rijnhart, Oscar Gonzalez
A novel approach to estimate moderated treatment effects and moderated mediated effects with continuous moderators.
Abstract: Moderation analysis is used to study under what conditions or for which subgroups of individuals a treatment effect is stronger or weaker. When a moderator variable is categorical, such as assigned sex, treatment effects can be estimated for each group, resulting in a treatment effect for males and a treatment effect for females. If a moderator variable is continuous, a strategy for investigating moderated treatment effects is to estimate conditional effects (i.e., simple slopes) via the pick-a-point approach. When conditional effects are estimated using the pick-a-point approach, they are often given the interpretation of "the treatment effect for the subgroup of individuals…." However, the interpretation of these conditional effects as subgroup effects is potentially misleading because conditional effects are interpreted at a specific value of the moderator variable (e.g., +1 SD above the mean). We describe a simple solution that resolves this problem using a simulation-based approach. We describe how to apply this simulation-based approach to estimate subgroup effects by defining subgroups using a range of scores on the continuous moderator variable. We apply this method to three empirical examples to demonstrate how to estimate subgroup effects for moderated treatment and moderated mediated effects when the moderator variable is continuous. Finally, we provide researchers with both SAS and R code to implement this method for situations similar to those described in this article. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 1-15. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10713862/pdf/
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2023-03-27. DOI: 10.1037/met0000554
Beth Baribault, Anne G E Collins
Troubleshooting Bayesian cognitive models.
Abstract: Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software that efficiently automates the Markov chain Monte Carlo sampling used for Bayesian model fitting, including the popular Stan and PyMC packages, which automate the dynamic Hamiltonian Monte Carlo and No-U-Turn Sampler (HMC/NUTS) algorithms that we spotlight here. Unfortunately, Bayesian cognitive models can struggle to pass the growing number of diagnostic checks required of Bayesian models. If any failures are left undetected, inferences about cognition based on the model's output may be biased or incorrect. As such, Bayesian cognitive models almost always require troubleshooting before being used for inference. Here, we present a deep treatment of the diagnostic checks and procedures that are critical for effective troubleshooting but are often left underspecified by tutorial papers. After a conceptual introduction to Bayesian cognitive modeling and HMC/NUTS sampling, we outline the diagnostic metrics, procedures, and plots necessary to detect problems in model output, with an emphasis on how these requirements have recently been changed and extended. Throughout, we explain how uncovering the exact nature of the problem is often the key to identifying solutions. We also demonstrate the troubleshooting process for an example hierarchical Bayesian model of reinforcement learning, including supplementary code. With this comprehensive guide to techniques for detecting, identifying, and overcoming problems in fitting Bayesian cognitive models, psychologists across subfields can more confidently build and use Bayesian cognitive models in their research. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 128-154. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522800/pdf/
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2023-05-25. DOI: 10.1037/met0000579
Pablo Nájera, Francisco J Abad, Miguel A Sorrel
Is exploratory factor analysis always to be preferred? A systematic comparison of factor analytic techniques throughout the confirmatory-exploratory continuum.
Abstract: The number of available factor analytic techniques has increased over the last decades. However, the lack of clear guidelines and of exhaustive comparison studies between the techniques may prevent these valuable methodological advances from making their way into applied research. The present paper evaluates the performance of confirmatory factor analysis (CFA), CFA with sequential model modification using modification indices and the Saris procedure, exploratory factor analysis (EFA) with different rotation procedures (Geomin, target, and objectively refined target matrix), Bayesian structural equation modeling (BSEM), and a new set of procedures that, after fitting an unrestricted model (i.e., EFA, BSEM), identify and retain only the relevant loadings to provide a parsimonious CFA solution (ECFA, BCFA). By means of an exhaustive Monte Carlo simulation study and a real-data illustration, it is shown that CFA and BSEM are overly rigid and, consequently, do not appropriately recover the structure of slightly misspecified models. EFA usually provides the most accurate parameter estimates, although the choice of rotation procedure is of major importance, especially depending on whether the latent factors are correlated. Finally, ECFA may be a sound option whenever an a priori structure cannot be hypothesized and the latent factors are correlated. Moreover, it is shown that the pattern of results of a factor analytic technique can, to some extent, be predicted from its position on the confirmatory-exploratory continuum. Applied recommendations are given for selecting the most appropriate technique under different representative scenarios by means of a detailed flowchart. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 16-39.
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2024-07-18. DOI: 10.1037/met0000665
Charles C Driver
Inference with cross-lagged effects: Problems in time.
Abstract: The interpretation of cross-effects from vector autoregressive models to infer structure and causality among constructs is widespread and sometimes problematic. I describe problems in the interpretation of cross-effects when processes that are thought to fluctuate continuously in time are, as is typically done, modeled as changing only in discrete steps (as in, e.g., structural equation modeling): zeroes in a discrete-time temporal matrix do not necessarily correspond to zero effects in the underlying continuous processes, and vice versa. This has implications for the common case in which the presence or absence of cross-effects is used for inference about underlying causal processes. I demonstrate these problems via simulation and also show that when an underlying set of processes is continuous in time, even relatively few direct causal links can result in much denser temporal effect matrices in discrete time. I demonstrate one solution to these issues, namely parameterizing the system as a stochastic differential equation and focusing inference on the continuous-time temporal effects. I follow this with a discussion of issues regarding the switch to continuous time, specifically regularization, appropriate measurement time lag, and model order. An empirical example using intensive longitudinal data highlights some of the complexities of applying such approaches to real data, particularly with respect to model specification, examining misspecification, and parameter interpretation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 174-202.
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2023-08-10. DOI: 10.1037/met0000586
Philipp Sterner, David Goretzko, Florian Pargent
Everything has its price: Foundations of cost-sensitive machine learning and its application in psychology.
Abstract: Psychology has seen an increase in the use of machine learning (ML) methods. In many applications, observations are classified into one of two groups (binary classification). Off-the-shelf classification algorithms assume that the costs of a misclassification (false positive or false negative) are equal. Because this is often not reasonable (e.g., in clinical psychology), cost-sensitive machine learning (CSL) methods can take different cost ratios into account. We present the mathematical foundations and introduce a taxonomy of the most commonly used CSL methods, before demonstrating their application and usefulness on psychological data, namely the drug consumption data set (N = 1,885) from the University of California Irvine ML Repository. In our example, all demonstrated CSL methods noticeably reduced mean misclassification costs compared to regular ML algorithms. We discuss the necessity for researchers to perform small benchmarks of CSL methods for their own practical applications. Our open materials provide R code demonstrating how CSL methods can be applied within the mlr3 framework (https://osf.io/cvks7/). (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 112-127.
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2023-01-09. DOI: 10.1037/met0000539
Diego G Campos, Mike W-L Cheung, Ronny Scherer
A primer on synthesizing individual participant data obtained from complex sampling surveys: A two-stage IPD meta-analysis approach.
Abstract: The increasing availability of individual participant data (IPD) in the social sciences offers new possibilities to synthesize research evidence across primary studies. Two-stage IPD meta-analysis represents a framework that can utilize these possibilities. While most of the methodological research on two-stage IPD meta-analysis has focused on its performance compared with other approaches, dealing with the complexities of the primary and meta-analytic data has received little attention, particularly when IPD are drawn from complex sampling surveys. Complex sampling surveys often feature clustering, stratification, and multistage sampling to obtain nationally or internationally representative data from a target population. Furthermore, IPD from these studies are likely to provide more than one effect size. To address these complexities, we propose a two-stage meta-analytic approach that generates model-based effect sizes in Stage 1 and synthesizes them in Stage 2. We present a sequence of steps, illustrate their implementation, and discuss the methodological decisions and options within. Given its flexibility to deal with the complex nature of the primary and meta-analytic data and its ability to combine multiple IPD sets or IPD with aggregated data, the proposed two-stage approach opens up new analytic possibilities for synthesizing knowledge from complex sampling surveys. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 83-111.
Psychological Methods. Pub Date: 2025-02-01. Epub Date: 2024-02-15. DOI: 10.1037/met0000643
Fabio Mason, Eva Cantoni, Paolo Ghisletta
Linear mixed models and latent growth curve models for group comparison studies contaminated by outliers.
Abstract: The linear mixed model (LMM) and latent growth model (LGM) are frequently applied to within-subject two-group comparison studies to investigate group differences in the time effect, supposedly due to differential group treatments. Yet research about LMM and LGM in the presence of outliers (defined as observations with a very low probability of occurrence if assumed from a given distribution) is scarce. Moreover, when such research exists, it focuses on estimation properties (bias and efficiency), neglecting inferential characteristics (e.g., power and Type I error). We study power and Type I error rates of Wald-type and bootstrap confidence intervals (CIs), as well as coverage and length of CIs and mean absolute error (MAE) of estimates, associated with classical and robust estimations of LMM and LGM applied to a within-subject two-group comparison design. We conduct a Monte Carlo simulation experiment to compare CIs and MAEs under different conditions: data (a) without contamination, (b) contaminated by within-subject outliers, (c) contaminated by between-subject outliers, and (d) contaminated by both within- and between-subject outliers. Results show that without contamination, the methods perform similarly, except that CIs based on S, a robust LMM estimator, have coverage slightly further from nominal values. However, in the presence of both within- and between-subject outliers, CIs based on robust estimators, especially S, performed better than those of classical methods. In particular, the percentile CI with the wild bootstrap applied to the robust LMM estimators outperformed all other methods, especially with between-subject outliers, a condition under which we found the classical Wald-type CI based on the t statistic with the Satterthwaite approximation for LMM to be highly misleading. We provide R code to compute all methods presented here. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
Pages: 155-173.
{"title":"Comparison of two independent populations of compositional data with positive correlations among components using a nested dirichlet distribution.","authors":"Jacob A Turner,Bianca A Luedeker,Monnie McGee","doi":"10.1037/met0000702","DOIUrl":"https://doi.org/10.1037/met0000702","url":null,"abstract":"Compositional data are multivariate data made up of components that sum to a fixed value. Often the data are presented as proportions of a whole, where the value of each component is constrained to be between 0 and 1 and the sum of the components is 1. There are many applications in psychology and other disciplines that yield compositional data sets including Morris water maze experiments, psychological well-being scores, analysis of daily physical activity times, and components of household expenditures. Statistical methods exist for compositional data and typically consist of two approaches. The first is to use transformation strategies, such as log ratios, which can lead to results that are challenging to interpret. The second involves using an appropriate distribution, such as the Dirichlet distribution, that captures the key characteristics of compositional data, and allows for ready interpretation of downstream analysis. Unfortunately, the Dirichlet distribution has constraints on variance and correlation that render it inappropriate for some applications. As a result, practicing researchers will often resort to standard two-sample t test or analysis of variance models for each variable in the composition to detect differences in means. We show that a recently published method using the Dirichlet distribution can drastically inflate Type I error rates, and we introduce a global two-sample test to detect differences in mean proportion of components for two independent groups where both groups are from either a Dirichlet or a more flexible nested Dirichlet distribution. We also derive confidence interval formulas for individual components for post hoc testing and further interpretation of results. We illustrate the utility of our methods using a recent Morris water maze experiment and human activity data. (PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"7 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142989142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological Methods. Pub Date: 2025-01-09. DOI: 10.1037/met0000722
Guangjian Zhang, Dayoung Lee, Yilin Li, Anthony Ong
Dynamic factor analysis with multivariate time series of multiple individuals: An error-corrected estimation method.
Abstract: Intensive longitudinal data, increasingly common in social and behavioral sciences, often consist of multivariate time series from multiple individuals. Dynamic factor analysis, combining factor analysis and time series analysis, has been used to uncover individual-specific processes from single-individual time series. However, integrating these processes across individuals is challenging due to estimation errors in individual-specific parameter estimates. We propose a method that integrates individual-specific processes while accommodating the corresponding estimation error. This method is computationally efficient and robust against model specification errors and nonnormal data. We compare our method with a Naive approach that ignores estimation error using both empirical and simulated data. The two methods produced similar estimates for fixed effect parameters, but the proposed method produced more satisfactory estimates for random effects than the Naive method. The relative advantage of the proposed method was more substantial for short to moderately long time series (T = 56-200). (PsycInfo Database Record (c) 2025 APA, all rights reserved).
{"title":"A causal research pipeline and tutorial for psychologists and social scientists.","authors":"Matthew James Vowels","doi":"10.1037/met0000673","DOIUrl":"https://doi.org/10.1037/met0000673","url":null,"abstract":"<p><p>Causality is a fundamental part of the scientific endeavor to understand the world. Unfortunately, causality is still taboo in much of psychology and social science. Motivated by a growing number of recommendations for the importance of adopting causal approaches to research, we reformulate the typical approach to research in psychology to harmonize inevitably causal theories with the rest of the research pipeline. We present a new process which begins with the incorporation of techniques from the confluence of causal discovery and machine learning for the development, validation, and transparent formal specification of theories. We then present methods for reducing the complexity of the fully specified theoretical model into the fundamental submodel relevant to a given target hypothesis. From here, we establish whether or not the quantity of interest is estimable from the data, and if so, propose the use of semi-parametric machine learning methods for the estimation of causal effects. The overall goal is the presentation of a new research pipeline which can (a) facilitate scientific inquiry compatible with the desire to test causal theories (b) encourage transparent representation of our theories as unambiguous mathematical objects, (c) tie our statistical models to specific attributes of the theory, thus reducing under-specification problems frequently resulting from the theory-to-model gap, and (d) yield results and estimates which are causally meaningful and reproducible. The process is demonstrated through didactic examples with real-world data, and we conclude with a summary and discussion of limitations. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142932515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}