Lennart Seizer, Günter Schiepek, Germaine Cornelissen, & Johanna Löchner (2024). A primer on sampling rates of ambulatory assessments. Psychological Methods. Published online May 30, 2024. https://doi.org/10.1037/met0000656

Abstract: Ambulatory assessment (AA), the collection of self-reported questionnaires or self-collected biochemical data during everyday life, is increasingly used to investigate individuals' experiences, states, and behaviors and their interaction with external situational factors. It is often implicitly assumed that data from different sampling protocols can be used interchangeably, even though the protocols assess processes over different timescales, at different intervals, and on different occasions, which, depending on the variables under study, may yield fundamentally different dynamics. There are multiple temporal parameters to consider, and although many sampling protocols are applied routinely, to date there is only limited empirical evidence on how different approaches influence the data and findings. In this review, we give an overview of commonly used types of AA in psychology, psychiatry, and biobehavioral research, broken down by temporal design parameters, and discuss potential advantages and pitfalls associated with the various approaches.
{"title":"Can cross-lagged panel modeling be relied on to establish cross-lagged effects? The case of contemporaneous and reciprocal effects.","authors":"Bengt Muthén, Tihomir Asparouhov","doi":"10.1037/met0000661","DOIUrl":"https://doi.org/10.1037/met0000661","url":null,"abstract":"<p><p>This article considers identification, estimation, and model fit issues for models with contemporaneous and reciprocal effects. It explores how well the models work in practice using Monte Carlo studies as well as real-data examples. Furthermore, by using models that allow contemporaneous and reciprocal effects, the paper raises a fundamental question about current practice for cross-lagged panel modeling using models such as cross-lagged panel model (CLPM) or random intercept cross-lagged panel model (RI-CLPM): Can cross-lagged panel modeling be relied on to establish cross-lagged effects? The article concludes that the answer is no, a finding that has important ramifications for current practice. It is suggested that analysts should use additional models to probe the temporalities of the CLPM and RI-CLPM effects to see if these could be considered contemporaneous rather than lagged. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141180162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexander Etz, Adriana F. Chávez de la Peña, Luis Baroja, Kathleen Medriano, & Joachim Vandekerckhove (2024). The HDI + ROPE decision rule is logically incoherent but we can fix it. Psychological Methods. Published online May 23, 2024. https://doi.org/10.1037/met0000660

Abstract: The Bayesian highest-density interval plus region of practical equivalence (HDI + ROPE) decision rule is an increasingly common approach to testing null parameter values. The decision procedure involves a comparison between a posterior highest-density interval (HDI) and a prespecified region of practical equivalence. One then accepts or rejects the null parameter value depending on the overlap (or lack thereof) between these intervals. Here, we demonstrate, both theoretically and through examples, that this procedure is logically incoherent. Because the HDI is not transformation invariant, the ultimate inferential decision depends on statistically arbitrary and scientifically irrelevant properties of the statistical model. The incoherence arises from a common confusion between probability density and probability proper. The HDI + ROPE procedure relies on characterizing posterior densities as opposed to being based directly on probability. We conclude with recommendations for alternative Bayesian testing procedures that do not exhibit this pathology and provide a "quick fix" in the form of quantile intervals. This article is the work of the authors and is reformatted from the original, which was published under a CC-BY Attribution 4.0 International license and is available at https://psyarxiv.com/5p2qt/.
{"title":"Detecting mediation effects with the Bayes factor: Performance evaluation and tools for sample size determination.","authors":"Xiao Liu, Zhiyong Zhang, Lijuan Wang","doi":"10.1037/met0000670","DOIUrl":"https://doi.org/10.1037/met0000670","url":null,"abstract":"<p><p>Testing the presence of mediation effects is important in social science research. Recently, Bayesian hypothesis testing with Bayes factors (BFs) has become increasingly popular. However, the use of BFs for testing mediation effects is still under-studied, despite the growing literature on Bayesian mediation analysis. In this study, we systematically examine the performance of the BF for testing the presence versus absence of a mediation effect. Our results showed that the false and/or true positive rates of detecting mediation with the BF can be impacted by the prior specification, including the prior odds of the presence of each path (treatment-mediator path or mediator-outcome path) used in the design stage for data generation and in the analysis stage for calculating the BF of the mediation effect. Based on our examination, we developed an R function and a web application to determine sample sizes for testing mediation effects with the BF. Our study provides insights on the performance of the BF for testing mediation effects and adds to researchers' toolbox of sample size determination for mediation studies. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141082095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
José Ángel Martínez-Huertas, Eduardo Estrada, & Ricardo Olmos (2024). Estimation of planned and unplanned missing individual scores in longitudinal designs using continuous-time state-space models. Psychological Methods. Published online May 16, 2024. https://doi.org/10.1037/met0000664

Abstract: Latent change score (LCS) models within a continuous-time state-space modeling framework provide a convenient statistical approach for analyzing developmental data. In this study, we evaluate the robustness of such an approach in the context of accelerated longitudinal designs (ALDs). ALDs are especially interesting because they imply a very high rate of planned data missingness. Additionally, most longitudinal studies suffer unexpected participant attrition, leading to unplanned missing data; in ALDs, both sources of missingness are therefore combined. Previous research has shown that ALDs for developmental research allow recovery of the population-generating process, but it is unknown how participant attrition impacts the model estimates. We have three goals: (a) to evaluate the robustness of the group-level parameter estimates in scenarios with empirically plausible unplanned data missingness; (b) to evaluate the performance of Kalman score (KS) imputations for individual data points that were expected but unobserved; and (c) to evaluate the performance of KS imputations for individual data points outside the age range observed for each case (i.e., to estimate the individual trajectories for the complete age range under study). In general, results showed a lack of bias in the simulated conditions. The variability of the estimates increased with lower sample sizes and higher missingness severity. Similarly, we found very accurate estimates of individual scores for both planned and unplanned missing data points. These results are very important for applied practitioners in terms of forecasting and making individual-level decisions. R code is provided to facilitate its implementation by applied researchers.
Lasse Elsemüller, Martin Schnuerch, Paul-Christian Bürkner, & Stefan T. Radev (2024). A deep learning method for comparing Bayesian hierarchical models. Psychological Methods. Published online May 6, 2024. https://doi.org/10.1037/met0000645

Abstract: Bayesian model comparison (BMC) offers a principled approach to assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models which can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then showcase our method by comparing four hierarchical evidence accumulation models that have previously been deemed intractable for BMC due to partly implicit likelihoods. Additionally, we demonstrate how transfer learning can be leveraged to enhance training efficiency. We provide reproducible code for all analyses and an open-source implementation of our method.
{"title":"Are factor scores measurement invariant?","authors":"Mark H C Lai, Winnie W-Y Tse","doi":"10.1037/met0000658","DOIUrl":"https://doi.org/10.1037/met0000658","url":null,"abstract":"<p><p>There has been increased interest in practical methods for integrative analysis of data from multiple studies or samples, and using factor scores to represent constructs has become a popular and practical alternative to latent variable models with all individual items. Although researchers are aware that scores representing the same construct should be on a similar metric across samples-namely they should be measurement invariant-for integrative data analysis, the methodological literature is unclear whether factor scores would satisfy such a requirement. In this note, we show that even when researchers successfully calibrate the latent factors to the same metric across samples, factor scores-which are estimates of the latent factors but not the factors themselves-may not be measurement invariant. Specifically, we prove that factor scores computed based on the popular regression method are generally not measurement invariant. Surprisingly, such scores can be noninvariant even when the items are invariant. We also demonstrate that our conclusions generalize to similar shrinkage scores in item response models for discrete items, namely the expected a posteriori scores and the maximum a posteriori scores. Researchers should be cautious in directly using factor scores for cross-sample analyses, even when such scores are obtained from measurement models that account for noninvariance. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140857844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael T. Carlin, Mack S. Costello, Madisyn A. Flansburg, & Alyssa Darden (2024). Reconsideration of the Type I error rate for psychological science in the era of replication. Psychological Methods, 379-387. Published April 1, 2024 (Epub April 11, 2022). https://doi.org/10.1037/met0000490

Abstract: Careful consideration of the tradeoff between Type I and Type II error rates when designing experiments is critical for maximizing statistical decision accuracy. Typically, Type I error rates (e.g., .05) are significantly lower than Type II error rates (e.g., .20 for .80 power) in psychological science. Further, positive findings (true effects and Type I errors) are more likely to be the focus of replication. This conventional approach leads to very high rates of Type II error. Analyses show that increasing the Type I error rate to .10, thereby increasing power and decreasing the Type II error rate for each test, leads to higher overall rates of correct statistical decisions. This increase of the Type I error rate is consistent with, and most beneficial in the context of, the replication and "New Statistics" movements in psychology.
Sandipan Pramanik & Valen E. Johnson (2024). Efficient alternatives for Bayesian hypothesis tests in psychology. Psychological Methods, 243-261. Published April 1, 2024 (Epub April 14, 2022). https://doi.org/10.1037/met0000482. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9561355/pdf/nihms-1808497.pdf

Abstract: Bayesian hypothesis testing procedures have gained increased acceptance in recent years. A key advantage that Bayesian tests have over classical testing procedures is their potential to quantify information in support of true null hypotheses. Ironically, default implementations of Bayesian tests prevent the accumulation of strong evidence in favor of true null hypotheses because associated default alternative hypotheses assign a high probability to data that are most consistent with a null effect. We propose the use of "nonlocal" alternative hypotheses to resolve this paradox. The resulting class of Bayesian hypothesis tests permits more rapid accumulation of evidence in favor of both true null hypotheses and alternative hypotheses that are compatible with standardized effect sizes of most interest in psychology.
Laura Kolbe, Dylan Molenaar, Suzanne Jak, & Terrence D. Jorgensen (2024). Assessing measurement invariance with moderated nonlinear factor analysis using the R package OpenMx. Psychological Methods, 388-406. Published April 1, 2024 (Epub July 4, 2022). https://doi.org/10.1037/met0000501

Abstract: Assessing measurement invariance is an important step in establishing a meaningful comparison of measurements of a latent construct across individuals or groups. Most recently, moderated nonlinear factor analysis (MNLFA) has been proposed as a method to assess measurement invariance. In MNLFA models, measurement invariance is examined in a single-group confirmatory factor analysis model by means of parameter moderation. The advantages of MNLFA over other methods are that it (a) accommodates the assessment of measurement invariance across multiple continuous and categorical background variables and (b) accounts for heteroskedasticity by allowing the factor and residual variances to differ as a function of the background variables. In this article, we aim to make MNLFA more accessible to researchers without access to commercial structural equation modeling software by demonstrating how this method can be applied with the open-source R package OpenMx.