Psychological methods | Pub Date: 2024-10-10 | DOI: 10.1037/met0000687
"The potential of preregistration in psychology: Assessing preregistration producibility and preregistration-study consistency."
Olmo R van den Akker, Marjan Bakker, Marcel A L M van Assen, Charlotte R Pennington, Leone Verweij, Mahmoud M Elsherif, Aline Claesen, Stefan D M Gaillard, Siu Kit Yeung, Jan-Luca Frankenberger, Kai Krautter, Jamie P Cockcroft, Katharina S Kreuer, Thomas Rhys Evans, Frédérique M Heppel, Sarah F Schoch, Max Korbmacher, Yuki Yamada, Nihan Albayrak-Aydemir, Shilaan Alzahawi, Alexandra Sarafoglou, Maksim M Sitnikov, Filip Děchtěrenko, Sophia Wingen, Sandra Grinschgl, Helena Hartmann, Suzanne L K Stewart, Cátia M F de Oliveira, Sarah Ashcroft-Jones, Bradley J Baker, Jelte M Wicherts
Abstract: Study preregistration has become increasingly popular in psychology, but its potential to restrict researcher degrees of freedom has not yet been empirically verified. We used an extensive protocol to assess the producibility (i.e., the degree to which a study can be properly conducted based on the available information) of preregistrations, and the consistency between preregistrations and their corresponding papers, for 300 psychology studies. We found that preregistrations often lack methodological details and that undisclosed deviations from preregistered plans are frequent. These results highlight that biases due to researcher degrees of freedom remain possible in many preregistered studies. More comprehensive registration templates typically yielded more producible preregistrations. We did not find that the producibility and consistency of preregistrations differed over time or between original and replication studies. Furthermore, we found that operationalizations of variables were generally preregistered more producibly and consistently than other study parts. Inconsistencies between preregistrations and published studies were mainly encountered for data collection procedures, statistical models, and exclusion criteria. Our results indicate that, to unlock the full potential of preregistration, researchers in psychology should aim to write more producible preregistrations, adhere to those preregistrations more faithfully, and report any deviations from them more transparently. This could be facilitated by training and education to improve preregistration skills, as well as by the development of more comprehensive templates.
Psychological methods | Pub Date: 2024-10-10 | DOI: 10.1037/met0000691
"Lagged multidimensional recurrence quantification analysis for determining leader-follower relationships within multidimensional time series."
Alon Tomashin, Ilanit Gordon, Giuseppe Leonardi, Yair Berson, Nir Milstein, Matthias Ziegler, Ursula Hess, Sebastian Wallot
Abstract: The current article introduces lagged multidimensional recurrence quantification analysis. The method extends multidimensional recurrence quantification analysis and allows researchers to quantify the joint dynamics of multivariate time series and to investigate leader-follower relationships in behavioral and physiological data. Moreover, the method enables the quantification of the joint dynamics of a group when such leader-follower relationships are taken into account. We first provide a formal presentation of the method and then apply it to synthetic data, as well as to data sets from joint action research, investigating the shared dynamics of facial expression and beats-per-minute recordings within different groups. A wrapper function is included for applying the method together with the "crqa" package in R.
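The core idea — scoring how often two multivariate systems revisit similar states at different relative lags, and reading the peak lag as a leader-follower signature — can be sketched in a few lines. This is an illustrative simplification, not the crqa implementation or the article's full method; all function names here are our own.

```python
import numpy as np

def aligned_recurrence_rate(x, y, radius):
    """Fraction of time-aligned multivariate states of x and y that lie
    within `radius` of each other (Euclidean distance)."""
    return float((np.linalg.norm(x - y, axis=1) < radius).mean())

def lagged_recurrence_profile(x, y, max_lag, radius=0.1):
    """Recurrence rate of x[t] against y[t + lag] for each lag in
    [-max_lag, max_lag]; a peak at a positive lag k suggests y follows
    x with a delay of k samples."""
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            xs, ys = x[:-lag], y[lag:]
        elif lag < 0:
            xs, ys = x[-lag:], y[:lag]
        else:
            xs, ys = x, y
        profile[lag] = aligned_recurrence_rate(xs, ys, radius)
    return profile

# Toy demo: b is a copy of a delayed by 3 samples, so the profile
# peaks at lag = +3 (b follows a).
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=(103, 2)), axis=0)
series = (series - series.mean(0)) / series.std(0)
a, b = series[3:], series[:-3]          # b[t] == a[t - 3]
prof = lagged_recurrence_profile(a, b, max_lag=5)
```

The full method additionally quantifies joint group dynamics and line-based recurrence measures; this sketch only covers the lagged recurrence-rate profile.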
Psychological methods | Pub Date: 2024-10-07 | DOI: 10.1037/met0000640
"Harvesting heterogeneity: Selective expertise versus machine learning."
Rumen Iliev, Alex Filipowicz, Francine Chen, Nikos Arechiga, Scott Carter, Emily Sumner, Totte Harinen, Kate Sieck, Kent Lyons, Charlene Wu
Abstract: The heterogeneity of outcomes in behavioral research has long been perceived as a challenge for the validity of various theoretical models. More recently, however, researchers have started perceiving heterogeneity as something that needs to be not only acknowledged but also actively addressed, particularly in applied research. A serious challenge is that classical psychological methods are not well suited for making practical recommendations when heterogeneous outcomes are expected. In this article, we argue that heterogeneity requires a separation between basic and applied behavioral methods, and between different types of behavioral expertise. We propose a novel framework for evaluating behavioral expertise and suggest that selective expertise can easily be automated via various machine learning methods. We illustrate the value of our framework via an empirical study of preferences toward battery electric vehicles. Our results suggest that a basic multiarm bandit algorithm vastly outperforms human expertise in selecting the best interventions.
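The kind of algorithm the study pits against human expertise can be illustrated with a minimal epsilon-greedy multiarm bandit. This is a generic sketch, not the authors' implementation; the arm success rates are made up for the demo.

```python
import random

def epsilon_greedy_bandit(true_rates, n_rounds=20000, epsilon=0.1, seed=7):
    """Minimal epsilon-greedy multiarm bandit: with probability epsilon
    explore a random arm, otherwise exploit the arm with the highest
    observed success rate (unpulled arms start optimistically at 1.0).
    Returns how often each arm was pulled."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    wins = [0] * len(true_rates)
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))
        else:
            arm = max(range(len(true_rates)),
                      key=lambda a: wins[a] / counts[a] if counts[a] else 1.0)
        counts[arm] += 1
        wins[arm] += rng.random() < true_rates[arm]
    return counts

# Three hypothetical interventions; the bandit should funnel most
# trials to the best one (index 2, true success rate 0.60).
pulls = epsilon_greedy_bandit([0.30, 0.45, 0.60])
```

The point of the paper's comparison is that such a simple allocation rule, run online, identifies the best intervention without any prior theory about why outcomes differ across people.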
Psychological methods | Pub Date: 2024-10-03 | DOI: 10.1037/met0000675
"How to conduct an integrative mixed methods meta-analysis: A tutorial for the systematic review of quantitative and qualitative evidence."
Heidi M Levitt
Abstract: This article is a guide on how to conduct mixed methods meta-analyses (sometimes called mixed methods systematic reviews, integrative meta-analyses, or integrative meta-syntheses), using an integrative approach. These aggregative methods allow researchers to synthesize qualitative and quantitative findings from a research literature in order to benefit from the strengths of both forms of analysis. The article articulates distinctions in how qualitative and quantitative methodologies work with variation to develop a coherent theoretical basis for their integration. In advancing this methodological approach to integrative mixed methods meta-analysis (IMMMA), I provide rationales for procedural decisions that support methodological integrity and address prior misconceptions that may explain why these methods have not been as commonly used as might be expected. Features of questions and subject matters that make them amenable to this research approach are considered. The steps to conducting an IMMMA are then described, with illustrative examples, in a manner open to the use of a range of qualitative and quantitative meta-analytic approaches. These steps include the development of research aims, the selection of primary research articles, the generation of units for analysis, and the development of themes and findings. The tutorial provides guidance on how to develop IMMMA findings that have methodological integrity and are based upon an appreciation of the distinctive approaches to modeling variation in quantitative and qualitative methodologies. The article concludes with guidance for report writing and developing principles for practice.
Psychological methods | Pub Date: 2024-10-01 | Epub Date: 2023-04-27 | DOI: 10.1037/met0000564 | Pages: 947-966
"Data-driven covariate selection for confounding adjustment by focusing on the stability of the effect estimator."
Wen Wei Loh, Dongning Ren
Abstract: Valid inference of cause-and-effect relations in observational studies necessitates adjusting for common causes of the focal predictor (i.e., treatment) and the outcome. When such common causes, henceforth termed confounders, remain unadjusted for, they generate spurious correlations that lead to biased causal effect estimates. But routine adjustment for all available covariates, when only a subset are truly confounders, is known to yield potentially inefficient and unstable estimators. In this article, we introduce a data-driven confounder selection strategy that focuses on stable estimation of the treatment effect. The approach exploits the causal knowledge that, after adjusting for confounders to eliminate all confounding biases, adding any remaining non-confounding covariates associated with only treatment or outcome, but not both, should not systematically change the effect estimator. The strategy proceeds in two steps. First, we prioritize covariates for adjustment by probing how strongly each covariate is associated with treatment and outcome. Next, we gauge the stability of the effect estimator by evaluating its trajectory while adjusting for different covariate subsets. The smallest subset that yields a stable effect estimate is then selected. Thus, the strategy offers direct insight into the (in)sensitivity of the effect estimator to the chosen covariates for adjustment. The ability to correctly select confounders and yield valid causal inferences following data-driven covariate selection is evaluated empirically using extensive simulation studies. Furthermore, we empirically compare the introduced method with routine variable-selection methods. Finally, we demonstrate the procedure using two publicly available real-world datasets. A step-by-step practical guide with user-friendly R functions is included.
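The two-step strategy — prioritize covariates, then watch the effect estimate's trajectory as covariates are added and keep the smallest subset after which it stabilizes — can be sketched on simulated data. This is a toy rendering, not the authors' R functions; the priority order, tolerance, and data-generating model are illustrative choices of ours.

```python
import numpy as np

def ols_effect(y, t, covs):
    """Treatment coefficient from OLS of y on [intercept, t, covs]."""
    X = np.column_stack([np.ones_like(t), t] + list(covs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def stability_selection(y, t, covariates, tol=0.05):
    """Adjust for covariates one at a time in the given priority order
    and track the treatment-effect estimate; choose the smallest subset
    after which the estimate stops moving by more than `tol`."""
    estimates = [ols_effect(y, t, covariates[:k])
                 for k in range(len(covariates) + 1)]
    chosen = next(k for k in range(len(estimates))
                  if all(abs(e - estimates[k]) < tol for e in estimates[k:]))
    return estimates, chosen

rng = np.random.default_rng(0)
n = 5000
c = rng.normal(size=n)                    # true confounder
p = rng.normal(size=n)                    # outcome-only predictor
z = rng.normal(size=n)                    # treatment-only predictor
t = 0.8 * c + 0.5 * z + rng.normal(size=n)
y = 1.0 * t + 1.2 * c + 0.7 * p + rng.normal(size=n)

# Priority order mimics step 1: the (true) confounder ranks first.
estimates, chosen = stability_selection(y, t, [c, p, z])
```

The unadjusted estimate is biased upward by the confounder; once `c` enters the model, the estimate settles near the true effect of 1.0 and adding `p` or `z` no longer moves it — exactly the stability signal the method exploits.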
Psychological methods | Pub Date: 2024-10-01 | Epub Date: 2022-10-13 | DOI: 10.1037/met0000534 | Pages: 919-946
"Estimating and investigating multiple constructs multiple indicators social relations models with and without roles within the traditional structural equation modeling framework: A tutorial."
David Jendryczko, Fridtjof W Nussbeck
Abstract: The present contribution provides a tutorial for the estimation of the social relations model (SRM) by means of structural equation modeling (SEM). In the overarching SEM framework, the SRM without roles (with interchangeable dyads) is derived as a more restrictive form of the SRM with roles (with noninterchangeable dyads). Starting with the simplest type of the SRM for one latent construct assessed by one manifest round-robin indicator, we show how the model can be extended to multiple constructs each measured by multiple indicators. We illustrate a multiple constructs multiple indicators SEM SRM both with and without roles using simulated data and explain the parameter interpretations. We present how testing the substantive model assumptions can be disentangled from testing the interchangeability of dyads. Additionally, we point out modeling strategies for cases in which only some members of a group can be differentiated with regard to their roles (i.e., only some group members are noninterchangeable). In the online supplemental materials, we provide concrete examples of specific modeling problems and their implementation in statistical software (Mplus, lavaan, and OpenMx). Advantages, caveats, possible extensions, and limitations in comparison with alternative modeling options are discussed.
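For intuition about what a social relations model decomposes, here is a naive ANOVA-style decomposition of a simulated round-robin matrix into actor and partner effects. The article estimates these components via SEM (Mplus, lavaan, OpenMx); this plain means-based sketch only shows that row and column means of a round-robin design recover the actor (rater) and partner (target) effects.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30                                   # group members in the round-robin
actor = rng.normal(0, 1.0, n)            # rater tendency (actor effect)
partner = rng.normal(0, 0.8, n)          # target tendency (partner effect)
rel = rng.normal(0, 0.5, (n, n))         # dyad-specific (relationship) residual
ratings = 3.0 + actor[:, None] + partner[None, :] + rel
np.fill_diagonal(ratings, np.nan)        # no self-ratings in a round-robin

grand = np.nanmean(ratings)
actor_hat = np.nanmean(ratings, axis=1) - grand    # row means -> actor estimates
partner_hat = np.nanmean(ratings, axis=0) - grand  # column means -> partner estimates
r_actor = np.corrcoef(actor, actor_hat)[0, 1]
r_partner = np.corrcoef(partner, partner_hat)[0, 1]
```

The SEM formulation adds what this sketch cannot: latent constructs behind multiple indicators, role restrictions, and formal tests of dyad interchangeability.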
Psychological methods | Pub Date: 2024-10-01 | Epub Date: 2022-09-01 | DOI: 10.1037/met0000516 | Pages: 967-979
"Updated guidelines on selecting an intraclass correlation coefficient for interrater reliability, with applications to incomplete observational designs."
Debby Ten Hove, Terrence D Jorgensen, L Andries van der Ark
Abstract: Several intraclass correlation coefficients (ICCs) are available to assess the interrater reliability (IRR) of observational measurements. Selecting an ICC is complicated, and existing guidelines have three major limitations. First, they do not discuss incomplete designs, in which raters partially vary across subjects. Second, they provide no coherent perspective on the error variance in an ICC, clouding the choice between the available coefficients. Third, the distinction between fixed or random raters is often misunderstood. Based on generalizability theory (GT), we provide updated guidelines on selecting an ICC for IRR, which are applicable to both complete and incomplete observational designs. We challenge conventional wisdom about ICCs for IRR by claiming that raters should seldom (if ever) be considered fixed. Also, we clarify how to interpret ICCs in the case of unbalanced and incomplete designs. We explain four choices a researcher needs to make when selecting an ICC for IRR, and guide researchers through these choices by means of a flowchart, which we apply to three empirical examples from clinical and developmental domains. In the Discussion, we provide guidance in reporting, interpreting, and estimating ICCs, and propose future directions for research into ICCs for IRR.
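As a concrete anchor for what an ICC computes, here is the one-way random-effects coefficient ICC(1), the variant appropriate when each subject is rated by a different set of raters (raters nested in subjects) — one of the designs such guidelines must cover. The formula follows the standard mean-squares definition; the function name is our own.

```python
from statistics import mean

def icc_oneway(ratings):
    """ICC(1): one-way random-effects intraclass correlation for a
    subjects x raters table where each subject may be rated by a
    different set of raters. `ratings` is a list of equal-length
    lists, one row per subject."""
    n = len(ratings)                 # number of subjects
    k = len(ratings[0])              # ratings per subject
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    # Between-subjects and within-subjects mean squares:
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(ratings, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between raters yields ICC = 1.
assert icc_oneway([[1, 1], [2, 2], [3, 3]]) == 1.0
```

ICC(1) attributes all rater disagreement to error; choosing between this and two-way variants (raters crossed with subjects, consistency vs. agreement) is precisely the decision process the article's flowchart formalizes.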
Psychological methods | Pub Date: 2024-10-01 | Epub Date: 2022-10-06 | DOI: 10.1037/met0000530 | Pages: 868-889
"Selecting scaling indicators in structural equation models (SEMs)."
Kenneth A Bollen, Adam G Lilly, Lan Luo
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10275390/pdf/
Abstract: It is common practice for psychologists to specify models with latent variables to represent concepts that are difficult to directly measure. Each latent variable needs a scale, and the most popular method of scaling, as well as the default in most structural equation modeling (SEM) software, uses a scaling or reference indicator. Much of the time, the choice of which indicator to use for this purpose receives little attention, and many analysts use the first indicator without considering whether there are better choices. When all indicators of the latent variable have essentially the same properties, the choice matters less. But when this is not true, we could benefit from scaling indicator guidelines. Our article first demonstrates why latent variables need a scale. We then propose a set of criteria and accompanying diagnostic tools that can assist researchers in making informed decisions about scaling indicators. The criteria for a good scaling indicator include high face validity, high correlation with the latent variable, factor complexity of one, no correlated errors, no direct effects with other indicators, a minimal number of significant overidentification equation tests and modification indices, and invariance across groups and time. We demonstrate these criteria and diagnostics using two empirical examples and provide guidance on navigating conflicting results among criteria.
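One of the proposed criteria — a scaling indicator should correlate highly with the latent variable — can be approximated with a crude observable screen: correlate each indicator with the sum of the remaining indicators. This sketch is only a stand-in for the article's full diagnostics (face validity, factor complexity, correlated errors, invariance, etc.), and the loadings below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
eta = rng.normal(size=n)                       # latent variable
loadings = [0.9, 0.7, 0.4]                     # true standardized loadings
indicators = [lam * eta + np.sqrt(1 - lam**2) * rng.normal(size=n)
              for lam in loadings]

# Crude screen: correlate each indicator with the sum score of the
# remaining indicators; under a one-factor model, the indicator with
# the highest such correlation loads most strongly on the latent
# variable and is a natural scaling-indicator candidate.
scores = []
for i, x in enumerate(indicators):
    rest = sum(ind for j, ind in enumerate(indicators) if j != i)
    scores.append(np.corrcoef(x, rest)[0, 1])
best = int(np.argmax(scores))
```

Here the screen correctly flags the first indicator (true loading 0.9); in real applications it would be one input among the article's several criteria, not a decision rule on its own.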
Psychological methods | Pub Date: 2024-09-30 | DOI: 10.1037/met0000694
"Multiple imputation of missing data in large studies with many variables: A fully conditional specification approach using partial least squares."
Simon Grund, Oliver Lüdtke, Alexander Robitzsch
Abstract: Multiple imputation (MI) is one of the most popular methods for handling missing data in psychological research. However, many imputation approaches are poorly equipped to handle a large number of variables, which is common in studies that employ questionnaires to assess psychological constructs. In such cases, conventional imputation approaches often become unstable and require that the imputation model be simplified, for example, by removing variables or combining them into composite scores. In this article, we propose an alternative method that extends the fully conditional specification approach to MI with dimension reduction techniques such as partial least squares. To evaluate this approach, we conducted a series of simulation studies in which we compared it with other approaches based on variable selection, composite scores, or dimension reduction through principal components analysis. Our findings indicate that this novel approach can provide accurate results even in challenging scenarios where other approaches fail to do so. Finally, we illustrate the use of this method in real data and discuss the implications of our findings for practice.
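The dimension-reduction idea can be shown in miniature for a single incomplete variable: instead of regressing on all predictors, the imputation model regresses on one PLS component (weights proportional to X'y). This is a deterministic single-imputation sketch of our own, not the article's full FCS multiple-imputation procedure, which handles many incomplete variables and draws imputations with appropriate uncertainty.

```python
import numpy as np

def pls_component(Xc, yc):
    """First PLS component: project centered X onto the covariance-
    weighted direction w ~ X'y (one NIPALS step; a simplification)."""
    w = Xc.T @ yc
    w = w / np.linalg.norm(w)
    return Xc @ w

def impute_with_pls(X, y, mask, n_iter=20):
    """Regression imputation of y's missing entries (mask == True),
    using a single PLS component of X as the predictor instead of all
    columns -- the dimension-reduction idea in miniature."""
    y = y.copy()
    y[mask] = y[~mask].mean()                 # crude starting values
    for _ in range(n_iter):
        yc = y - y.mean()
        t = pls_component(X - X.mean(0), yc)
        b = (t @ yc) / (t @ t)                # slope of y on the component
        y[mask] = y.mean() + b * t[mask]
    return y

rng = np.random.default_rng(5)
n, p = 1000, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                                # only 5 of 50 predictors matter
y_true = X @ beta + rng.normal(size=n)
mask = rng.random(n) < 0.3                    # ~30% missing at random
y_obs = y_true.copy()
y_obs[mask] = np.nan

y_imp = impute_with_pls(X, y_obs, mask)
err_pls = np.mean((y_imp[mask] - y_true[mask]) ** 2)
err_mean = np.mean((y_obs[~mask].mean() - y_true[mask]) ** 2)
```

Even with 50 predictors and a single component, the PLS-based imputations track the truth far better than mean imputation — the stability gain that motivates using PLS inside each conditional model.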
Psychological methods | Pub Date: 2024-09-30 | DOI: 10.1037/met0000672
"Bayesian estimation and comparison of idiographic network models."
Björn S Siepe, Matthias Kloft, Daniel W Heck
Abstract: Idiographic network models are estimated on time series data of a single individual and allow researchers to investigate person-specific associations between multiple variables over time. The most common approach for fitting graphical vector autoregressive (GVAR) models uses least absolute shrinkage and selection operator (LASSO) regularization to estimate a contemporaneous and a temporal network. However, estimation of idiographic networks can be unstable in the relatively small data sets typical for psychological research. This bears the risk of misinterpreting differences in estimated networks as spurious heterogeneity between individuals. As a remedy, we evaluate the performance of a Bayesian alternative for fitting GVAR models that allows for regularization of parameters while accounting for estimation uncertainty. We also develop a novel test, implemented in the tsnet package in R, which assesses whether differences between estimated networks are reliable based on matrix norms. We first compare Bayesian and LASSO approaches across a range of conditions in a simulation study. Overall, LASSO estimation performs well, while a Bayesian GVAR without edge selection may perform better when the true network is dense. In an additional simulation study, the novel test is conservative and shows good false-positive rates. Finally, we apply Bayesian estimation and testing in an empirical example using daily data on clinical symptoms for 40 individuals. We additionally provide functionality to estimate Bayesian GVAR models in Stan within tsnet. Overall, Bayesian GVAR modeling facilitates the assessment of estimation uncertainty, which is important for studying interindividual differences in intraindividual dynamics. In doing so, the novel test serves as a safeguard against premature conclusions of heterogeneity.
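For readers new to GVAR models, the two networks being estimated can be illustrated with a plain least-squares version: the temporal network is the VAR(1) coefficient matrix, and the contemporaneous network is the partial-correlation structure of the residuals. This sketch is neither the LASSO nor the Bayesian estimator, and it is not the tsnet package — just the model those methods target, fit on toy data.

```python
import numpy as np

def fit_gvar(Y):
    """Least-squares sketch of a GVAR for a (T x p) series Y:
    temporal network = VAR(1) coefficients (Y_t ~ B Y_{t-1});
    contemporaneous network = partial correlations of the residuals,
    obtained from the inverse residual covariance (precision) matrix."""
    X, Z = Y[:-1], Y[1:]
    B, *_ = np.linalg.lstsq(X, Z, rcond=None)   # B[i, j]: var i at t-1 -> var j at t
    resid = Z - X @ B
    K = np.linalg.inv(np.cov(resid.T))          # precision matrix
    d = np.sqrt(np.diag(K))
    pcor = -K / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return B.T, pcor                            # B.T[i, j]: lag-1 effect of j on i

# Toy data from a known VAR(1): variable 0 drives variable 1.
rng = np.random.default_rng(11)
true_B = np.array([[0.5, 0.0],                  # row i: coefficients for Y_t[i]
                   [0.4, 0.3]])
Y = np.zeros((3000, 2))
for t in range(1, 3000):
    Y[t] = true_B @ Y[t - 1] + rng.normal(size=2)
B_hat, pcor = fit_gvar(Y)
```

With 3,000 time points the least-squares fit recovers the temporal network well; the paper's concern is the realistic regime of far shorter individual time series, where unregularized estimates like these become unstable and Bayesian uncertainty quantification earns its keep.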