Psychological Methods · Pub Date: 2024-11-14 · DOI: 10.1037/met0000695
Simulation studies for methodological research in psychology: A standardized template for planning, preregistration, and reporting
Björn S Siepe, František Bartoš, Tim P Morris, Anne-Laure Boulesteix, Daniel W Heck, Samuel Pawel

Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary widely in terms of their design, execution, and reporting. To assess the quality of typical simulation studies in psychology, we reviewed 321 articles published in Psychological Methods, Behavior Research Methods, and Multivariate Behavioral Research in 2021 and 2022, among which 100/321 = 31.2% report a simulation study. We find that many articles do not provide complete and transparent information about key aspects of the study, such as justifications for the number of simulation repetitions, Monte Carlo uncertainty estimates, or code and data to reproduce the simulation studies. To address this problem, we provide a summary of the ADEMP (aims, data-generating mechanism, estimands and other targets, methods, performance measures) design and reporting framework from Morris et al. (2019), adapted to simulation studies in psychology. Based on this framework, we provide ADEMP-PreReg, a step-by-step template for researchers to use when designing, potentially preregistering, and reporting their simulation studies. We give formulae for estimating common performance measures, their Monte Carlo standard errors, and for calculating the number of simulation repetitions needed to achieve a desired Monte Carlo standard error. Finally, we give a detailed tutorial on how to apply the ADEMP framework in practice, using an example simulation study on the evaluation of methods for the analysis of pre-post measurement experiments. (PsycInfo Database Record (c) 2024 APA, all rights reserved.)
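The Monte-Carlo-standard-error calculations described in the abstract can be sketched as follows. This is a minimal illustration of the standard formulas (MCSE of bias = SD of estimates / √n_sim; MCSE of a coverage proportion = √(p(1−p)/n_sim)), not the authors' ADEMP-PreReg code; the function names are hypothetical:

```python
import math

def mcse_bias(sd_estimates: float, n_sim: int) -> float:
    # Monte Carlo SE of estimated bias: SD of per-repetition estimates / sqrt(n_sim)
    return sd_estimates / math.sqrt(n_sim)

def n_sim_for_bias(sd_estimates: float, target_mcse: float) -> int:
    # Repetitions needed so that MCSE(bias) <= target_mcse
    return math.ceil((sd_estimates / target_mcse) ** 2)

def n_sim_for_coverage(p: float, target_mcse: float) -> int:
    # Repetitions needed for a coverage proportion p (p = 0.5 is the worst case)
    return math.ceil(p * (1 - p) / target_mcse ** 2)

# Example: anticipated SD of estimates 0.5, desired MCSE of bias 0.01
print(n_sim_for_bias(0.5, 0.01))      # 2500 repetitions
print(n_sim_for_coverage(0.5, 0.01))  # 2500 (worst-case coverage)
```

Planning the repetition count this way, before running the study, is exactly the kind of justification the review found missing in most articles.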
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2023-04-27 · DOI: 10.1037/met0000564
Data-driven covariate selection for confounding adjustment by focusing on the stability of the effect estimator
Wen Wei Loh, Dongning Ren
Pages: 947-966

Valid inference of cause-and-effect relations in observational studies necessitates adjusting for common causes of the focal predictor (i.e., treatment) and the outcome. When such common causes, henceforth termed confounders, remain unadjusted for, they generate spurious correlations that lead to biased causal effect estimates. But routine adjustment for all available covariates, when only a subset are truly confounders, is known to yield potentially inefficient and unstable estimators. In this article, we introduce a data-driven confounder selection strategy that focuses on stable estimation of the treatment effect. The approach exploits the causal knowledge that, after adjusting for confounders to eliminate all confounding biases, adding any remaining non-confounding covariates associated with only treatment or outcome, but not both, should not systematically change the effect estimator. The strategy proceeds in two steps. First, we prioritize covariates for adjustment by probing how strongly each covariate is associated with treatment and outcome. Next, we gauge the stability of the effect estimator by evaluating its trajectory when adjusting for different covariate subsets. The smallest subset that yields a stable effect estimate is then selected. Thus, the strategy offers direct insight into the (in)sensitivity of the effect estimator to the chosen covariates for adjustment. The ability to correctly select confounders and yield valid causal inferences following data-driven covariate selection is evaluated using extensive simulation studies. Furthermore, we compare the proposed method empirically with routine variable selection methods. Finally, we demonstrate the procedure using two publicly available real-world datasets. A step-by-step practical guide with user-friendly R functions is included.
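The stability idea behind this strategy can be demonstrated with a toy simulation: adjusting for the true confounder moves the effect estimate, while adding a non-confounding covariate afterward barely changes it. This sketch uses plain OLS on simulated data and is not the authors' R implementation; all variable names and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
c = rng.normal(size=n)    # confounder: affects both treatment and outcome
x1 = rng.normal(size=n)   # affects treatment only (non-confounder)
x2 = rng.normal(size=n)   # affects outcome only (non-confounder)
t = 0.8 * c + 0.5 * x1 + rng.normal(size=n)
y = 1.0 * t + 0.7 * c + 0.6 * x2 + rng.normal(size=n)   # true effect of t is 1.0

def effect(covs):
    # OLS coefficient of t on y, adjusting for the given covariates
    X = np.column_stack([np.ones(n), t] + covs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Trajectory of the effect estimate over growing adjustment sets:
print(effect([]))        # biased upward (confounder omitted)
print(effect([c]))       # ~1.0: stabilizes once the confounder is adjusted for
print(effect([c, x2]))   # still ~1.0: a non-confounder barely moves the estimate
```

The "smallest subset yielding a stable estimate" here is {c}: adding x2 changes precision, not the point estimate's location.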
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-10-13 · DOI: 10.1037/met0000534
Estimating and investigating multiple constructs multiple indicators social relations models with and without roles within the traditional structural equation modeling framework: A tutorial
David Jendryczko, Fridtjof W Nussbeck
Pages: 919-946

The present contribution provides a tutorial for the estimation of the social relations model (SRM) by means of structural equation modeling (SEM). In the overarching SEM framework, the SRM without roles (with interchangeable dyads) is derived as a more restrictive form of the SRM with roles (with noninterchangeable dyads). Starting with the simplest type of SRM, for one latent construct assessed by one manifest round-robin indicator, we show how the model can be extended to multiple constructs, each measured by multiple indicators. We illustrate a multiple constructs multiple indicators SEM SRM, both with and without roles, with simulated data and explain the parameter interpretations. We present how testing the substantive model assumptions can be disentangled from testing the interchangeability of dyads. Additionally, we point out modeling strategies that apply to cases in which only some members of a group can be differentiated with regard to their roles (i.e., only some group members are noninterchangeable). In the online supplemental materials, we provide concrete examples of specific modeling problems and their implementation in statistical software (Mplus, lavaan, and OpenMx). Advantages, caveats, possible extensions, and limitations in comparison with alternative modeling options are discussed.
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-09-01 · DOI: 10.1037/met0000516
Updated guidelines on selecting an intraclass correlation coefficient for interrater reliability, with applications to incomplete observational designs
Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark
Pages: 967-979

Several intraclass correlation coefficients (ICCs) are available to assess the interrater reliability (IRR) of observational measurements. Selecting an ICC is complicated, and existing guidelines have three major limitations. First, they do not discuss incomplete designs, in which raters partially vary across subjects. Second, they provide no coherent perspective on the error variance in an ICC, clouding the choice between the available coefficients. Third, the distinction between fixed or random raters is often misunderstood. Based on generalizability theory (GT), we provide updated guidelines on selecting an ICC for IRR, which are applicable to both complete and incomplete observational designs. We challenge conventional wisdom about ICCs for IRR by claiming that raters should seldom (if ever) be considered fixed. Also, we clarify how to interpret ICCs in the case of unbalanced and incomplete designs. We explain four choices a researcher needs to make when selecting an ICC for IRR, and guide researchers through these choices by means of a flowchart, which we apply to three empirical examples from clinical and developmental domains. In the Discussion, we provide guidance on reporting, interpreting, and estimating ICCs, and propose future directions for research into the ICCs for IRR.
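To make the coefficient choice concrete, here is a minimal sketch of the simplest of these coefficients, the one-way ICC(1), for a complete balanced design; incomplete designs require the GT-based machinery the article develops. The ratings below are toy values, not data from the article:

```python
import numpy as np

# Ratings: rows = subjects, columns = raters (complete balanced design)
x = np.array([[9, 2, 5, 8],
              [6, 1, 3, 2],
              [8, 4, 6, 8],
              [7, 1, 2, 6],
              [10, 5, 6, 9],
              [6, 2, 4, 7]], dtype=float)
n, k = x.shape

# One-way ANOVA mean squares
grand = x.mean()
ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))

# ICC(1): subject variance relative to subject-plus-residual variance
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(round(icc1, 3))   # 0.166
```

In the one-way model, rater and residual variance are inseparable and both count as error, which is one reason the choice of error variance, and hence of coefficient, matters.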
Psychological Methods · Pub Date: 2024-10-01 · Epub Date: 2022-10-06 · DOI: 10.1037/met0000530
Selecting scaling indicators in structural equation models (SEMs)
Kenneth A Bollen, Adam G Lilly, Lan Luo
Pages: 868-889 · Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275390/pdf/

It is common practice for psychologists to specify models with latent variables to represent concepts that are difficult to directly measure. Each latent variable needs a scale, and the most popular method of scaling, as well as the default in most structural equation modeling (SEM) software, uses a scaling or reference indicator. Much of the time, the choice of which indicator to use for this purpose receives little attention, and many analysts use the first indicator without considering whether there are better choices. When all indicators of the latent variable have essentially the same properties, then the choice matters less. But when this is not true, we could benefit from scaling indicator guidelines. Our article first demonstrates why latent variables need a scale. We then propose a set of criteria and accompanying diagnostic tools that can assist researchers in making informed decisions about scaling indicators. The criteria for a good scaling indicator include high face validity, high correlation with the latent variable, factor complexity of one, no correlated errors, no direct effects with other indicators, a minimal number of significant overidentification equation tests and modification indices, and invariance across groups and time. We demonstrate these criteria and diagnostics using two empirical examples and provide guidance on navigating conflicting results among criteria.
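Why a latent variable needs a scale, and why the scaling indicator merely fixes that scale, can be shown with a small numerical check: reparameterizing a one-factor model around any reference indicator reproduces the identical model-implied covariance matrix. The loadings and residual variances below are invented for illustration, not taken from the article:

```python
import numpy as np

# Population covariance matrix of a one-factor model with three indicators
lam = np.array([0.9, 0.6, 0.3])    # loadings with the factor variance fixed to 1
theta = np.array([0.3, 0.5, 0.8])  # residual variances
sigma = np.outer(lam, lam) + np.diag(theta)

def implied(ref):
    # Reparameterize with indicator `ref` as the scaling indicator (its loading
    # fixed to 1): the factor variance becomes lam[ref]**2 and the remaining
    # loadings become lam / lam[ref]
    psi = lam[ref] ** 2
    load = lam / lam[ref]
    return psi * np.outer(load, load) + np.diag(theta)

# Every choice of scaling indicator implies the same covariance matrix:
# the choice fixes the latent scale, not the model's fit
for ref in range(3):
    print(np.allclose(implied(ref), sigma))   # True for each ref
```

Because fit is unaffected in a correctly specified model, the article's criteria (face validity, correlation with the latent variable, factor complexity of one, and so on) are what distinguish a good scaling indicator, not global fit.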
Psychological Methods · Pub Date: 2024-08-01 · Epub Date: 2023-03-27 · DOI: 10.1037/met0000568
Factorization of person response profiles to identify summative profiles carrying central response patterns
Se-Kang Kim
Pages: 723-730

A data matrix, where rows represent persons and columns represent measured subtests, can be viewed as a stack of person profiles, as rows are actually person profiles of observed responses on column subtests. Profile analysis seeks to identify a small number of latent profiles from a large number of person response profiles to identify central response patterns, which are useful for assessing the strengths and weaknesses of individuals across multiple dimensions in domains of interest. Moreover, the latent profiles are mathematically proven to be summative profiles that linearly combine all person response profiles. Since person response profiles confound profile level and response pattern, the level effect must be controlled when they are factorized to identify a latent (or summative) profile that carries the response pattern effect. However, when the level effect is dominant but uncontrolled, only a summative profile carrying the level effect would be considered statistically meaningful according to a traditional metric (e.g., eigenvalue ≥ 1) or parallel analysis results. Nevertheless, the response pattern effect among individuals can provide assessment-relevant insights that are overlooked by conventional analysis; to achieve this, the level effect must be controlled. Consequently, the purpose of this study is to demonstrate how to correctly identify summative profiles containing central response patterns regardless of the centering techniques used on data sets.
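The level-versus-pattern confound can be seen in a small simulation: with a dominant level effect, the first component of the raw data matrix is essentially the level, while within-person (row) centering removes it and lets components of comparable size carry the response patterns. This is a generic SVD illustration with invented patterns, not the author's procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
n_persons, n_subtests = 200, 6

# Two orthogonal response patterns (each sums to zero) plus person levels
pattern_a = np.array([1.0, -1.0, 0.0, 0.0, 1.0, -1.0])
pattern_b = np.array([1.0, 1.0, -1.0, -1.0, 0.0, 0.0])
levels = rng.normal(5.0, 3.0, size=(n_persons, 1))   # dominant level effect
weights = rng.normal(size=(n_persons, 2))
data = (levels + weights @ np.vstack([pattern_a, pattern_b])
        + 0.3 * rng.normal(size=(n_persons, n_subtests)))

# Within-person (row) centering removes the level effect from each profile
centered = data - data.mean(axis=1, keepdims=True)

s_raw = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
s_cen = np.linalg.svd(centered - centered.mean(axis=0), compute_uv=False)

# Raw data: the level swamps everything else in the first singular value;
# row-centered data: the leading components now carry the response patterns
print(s_raw[0] / s_raw[1], s_cen[0] / s_cen[1])
```

A traditional eigenvalue cutoff applied to the raw data would retain only the level component, which is exactly the problem the article addresses.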
Psychological Methods · Pub Date: 2024-08-01 · Epub Date: 2023-06-12 · DOI: 10.1037/met0000585
A comprehensive model framework for between-individual differences in longitudinal data
Anja F Ernst, Casper J Albers, Marieke E Timmerman
Pages: 748-766

Across different fields of research, the similarities and differences between various longitudinal models are not always immediately clear, due to differences in data structure, application area, and terminology. Here we propose a comprehensive model framework that allows simple comparisons between longitudinal models, to ease their empirical application and interpretation. At the within-individual level, our model framework accounts for various attributes of longitudinal data, such as growth and decline, cyclical trends, and the dynamic interplay between variables over time. At the between-individual level, our framework contains continuous and categorical latent variables to account for between-individual differences. This framework encompasses several well-known longitudinal models, including multilevel regression models, growth curve models, growth mixture models, vector-autoregressive models, and multilevel vector-autoregressive models. The general model framework is specified and its key characteristics are illustrated using famous longitudinal models as concrete examples. Various longitudinal models are reviewed, and it is shown that all these models can be united into our comprehensive model framework. Extensions to the model framework are discussed. Recommendations for selecting and specifying longitudinal models are made for empirical researchers who aim to account for between-individual differences.
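The combination of within-individual dynamics and between-individual differences that such a framework unifies can be illustrated with the simplest case: a multilevel AR(1) in which each person has their own mean and autoregressive coefficient. This toy simulation is not the authors' framework, only a sketch of the data structure it targets:

```python
import numpy as np

rng = np.random.default_rng(11)
n_persons, n_time = 100, 300

# Between-individual differences: person-specific means and AR(1) dynamics
mu = rng.normal(0.0, 1.0, size=n_persons)
phi = np.clip(rng.normal(0.4, 0.1, size=n_persons), -0.9, 0.9)

# Within-individual level: each person's series reverts to their own mean
y = np.zeros((n_persons, n_time))
y[:, 0] = mu
for t in range(1, n_time):
    y[:, t] = mu + phi * (y[:, t - 1] - mu) + rng.normal(size=n_persons)

# Per-person lag-1 autocorrelations recover the distribution of dynamics
def ar1_hat(series):
    d = series - series.mean()
    return np.corrcoef(d[:-1], d[1:])[0, 1]

phi_hat = np.array([ar1_hat(y[i]) for i in range(n_persons)])
print(phi_hat.mean(), phi_hat.std())   # near 0.4, with real between-person spread
```

Treating phi as random across persons is what distinguishes the multilevel vector-autoregressive corner of the framework from a single-subject AR model.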
Psychological Methods · Pub Date: 2024-08-01 · Epub Date: 2023-05-11 · DOI: 10.1037/met0000582
Inclusion Bayes factors for mixed hierarchical diffusion decision models
Udo Boehm, Nathan J Evans, Quentin F Gronau, Dora Matzke, Eric-Jan Wagenmakers, Andrew J Heathcote
Pages: 625-655

Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the nonlinearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work, we show how these challenges can be addressed through a combination of Bayesian hierarchical modeling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study.
Psychological Methods · Pub Date: 2024-08-01 · Epub Date: 2023-02-16 · DOI: 10.1037/met0000558
Multivariate analysis of covariance for heterogeneous and incomplete data
Guillermo Vallejo, María Paula Fernández, Pablo Esteban Livacic-Rojas
Pages: 731-747

This article discusses the robustness of the multivariate analysis of covariance (MANCOVA) test for an emergent variable system and proposes a modification of this test to obtain adequate information from heterogeneous normal observations. The proposed approach for testing potential effects in heterogeneous MANCOVA models can be adopted effectively, regardless of the degree of heterogeneity and sample size imbalance. As our method was not designed to handle missing values, we also show how to derive the formulas for pooling the results of multiple-imputation-based analyses into a single final estimate. Results of simulation studies and analyses of real data show that the proposed combining rules provide adequate coverage and power. Based on the current evidence, the two solutions suggested could be effectively used by researchers for testing hypotheses, provided that the data conform to normality.
Psychological Methods · Pub Date: 2024-08-01 · Epub Date: 2023-04-13 · DOI: 10.1037/met0000569
A posterior expected value approach to decision-making in the multiphase optimization strategy for intervention science
Jillian C Strayhorn, Linda M Collins, David J Vanness
Pages: 656-678

In current practice, intervention scientists applying the multiphase optimization strategy (MOST) with a 2^k factorial optimization trial use a component screening approach (CSA) to select intervention components for inclusion in an optimized intervention. In this approach, scientists review all estimated main effects and interactions to identify the important ones based on a fixed threshold, and then base decisions about component selection on these important effects. We propose an alternative posterior expected value approach based on Bayesian decision theory. This new approach aims to be easier to apply and more readily extensible to a variety of intervention optimization problems. We used Monte Carlo simulation to evaluate the performance of a posterior expected value approach and CSA (automated for simulation purposes) relative to two benchmarks: random component selection, and the classical treatment package approach. We found that both the posterior expected value approach and CSA yielded substantial performance gains relative to the benchmarks. We also found that the posterior expected value approach outperformed CSA modestly but consistently in terms of overall accuracy, sensitivity, and specificity, across a wide range of realistic variations in simulated factorial optimization trials. We discuss implications for intervention optimization and promising future directions in the use of posterior expected value to make decisions in MOST.
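The core of a posterior expected value approach can be sketched schematically: given posterior draws for component effects, score every on/off combination of components by its posterior expected outcome and select the best. This toy version uses simulated "posterior draws" with main effects only, whereas a real MOST application would use draws from a Bayesian model of the 2^k factorial trial data (possibly with interactions and cost constraints); all numbers are invented:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Hypothetical posterior draws for the main effects of k = 3 components
k = 3
true_effects = np.array([0.30, -0.05, 0.15])
draws = true_effects + 0.05 * rng.normal(size=(4000, k))

# Score each on/off combination by its posterior expected outcome
best_combo, best_value = None, -np.inf
for combo in product([0, 1], repeat=k):
    value = float(np.mean(draws @ np.array(combo)))
    if value > best_value:
        best_combo, best_value = combo, value

print(best_combo)   # the component with a negative expected effect is dropped
```

Unlike threshold-based screening, no effect is declared "important" or "unimportant": the decision falls directly out of maximizing expected value under the posterior.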