{"title":"The Bayesian reservoir model of psychological regulation.","authors":"Mirinda M Whitaker,Cindy S Bergeman,Pascal R Deboeck","doi":"10.1037/met0000690","DOIUrl":"https://doi.org/10.1037/met0000690","url":null,"abstract":"Social and behavioral scientists are increasingly interested the dynamics of the processes they study. Despite the wide array of processes studied, a fairly narrow set of models are applied to characterize dynamics within these processes. For social and behavioral research to take the next step in modeling dynamics, a wider variety of models need to be considered. The reservoir model is one model of psychological regulation that helps expand the models available (Deboeck & Bergeman, 2013). The present article implements the Bayesian reservoir model for both single time series and multilevel data. Simulation 1 compares the performance of the original version of the reservoir model fit using structural equation modeling (Deboeck & Bergeman, 2013) to the proposed Bayesian estimation approach. Simulation 2 expands this to a multilevel data scenario and compares this to the single-level version. The Bayesian estimation approach performs substantially better than the original estimation approach and produces low-bias estimates even with time series as short as 25 observations. Combining Bayesian estimation with a multilevel modeling approach allows for relatively unbiased estimation with sample sizes as small as 15 individuals and/or with time series as short as 15 observations. Finally, a substantive example is presented that applies the Bayesian reservoir model to perceived stress, examining how the model parameters relate to psychological variables commonly expected to relate to resilience. The current expansion of the reservoir model demonstrates the benefits of leveraging the combined strengths of Bayesian estimation and multilevel modeling, with new dynamic models that have been tailored to match the process of psychological regulation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering methods: To optimize or to not optimize?","authors":"Michael Brusco,Douglas Steinley,Ashley L Watts","doi":"10.1037/met0000688","DOIUrl":"https://doi.org/10.1037/met0000688","url":null,"abstract":"Many clustering problems are associated with a particular objective criterion that is sought to be optimized. There are often several methods that can be used to tackle the optimization problem, and one or more of them might guarantee a globally optimal solution. However, it is quite possible that, relative to one or more suboptimal solutions, a globally optimal solution might be less interpretable from the standpoint of psychological theory or be less in accordance with some known (i.e., true) cluster structure. For example, in simulation experiments, it has sometimes been observed that there is not a perfect correspondence between the optimized clustering criterion and recovery of the underlying known cluster structure. This can lead to the misconception that clustering methods with a tendency to produce suboptimal solutions might, in some instances, be preferable to superior methods that provide globally optimal (or at least better locally optimal) solutions. In this article, we present results from simulation studies in the context of K-median clustering where departure from global optimality was carefully controlled. Although the results showed that suboptimal solutions sometimes produced marginally better recovery for experimental cells where the known cluster structure was less well-defined, capriciously accepting inferior solutions is an unwise practice. However, there are instances in which some sacrifice in the optimization criterion value to meet certain desirable constraints or to improve the value of one or more other relevant criteria is principled. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142174794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"So is it better than something else? Using the results of a random-effects meta-analysis to characterize the magnitude of an effect size as a percentile.","authors":"Peter Boedeker,Gena Nelson,Hannah Carter","doi":"10.1037/met0000704","DOIUrl":"https://doi.org/10.1037/met0000704","url":null,"abstract":"The characterization of an effect size is best made in reference to effect sizes found in the literature. A random-effects meta-analysis is the systematic synthesis of related effects from across a literature, producing an estimate of the distribution of effects in the population. We propose using the estimated mean and variance from a random-effects meta-analysis to inform the characterization of an observed effect size. The percentile of an observed effect size within the estimated distribution of population effects can describe the magnitude of the observed effect. Because there is uncertainty in the population estimates, we propose using the prediction distribution (used frequently to estimate the prediction interval in a meta-analysis) to serve as the reference distribution when characterizing an effect size. Doing so, the percentile of an observed effect and the limits of the effect size's 95% confidence interval within the prediction distribution are calculated. With numerous meta-analyses available including various outcomes and contexts, the presented method can be useful to many researchers and practitioners. We demonstrate the application of an easy-to-use Excel worksheet to automate these percentile calculations. We follow this with a simulation study evaluating the method's performance over a range of conditions. Recommendations (and cautions) for meta-analysts and researchers conducting a single study are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unidimensional community detection: A monte carlo simulation, grid search, and comparison.","authors":"Alexander P Christensen","doi":"10.1037/met0000692","DOIUrl":"https://doi.org/10.1037/met0000692","url":null,"abstract":"Unidimensionality is fundamental to psychometrics. Despite the recent focus on dimensionality assessment in network psychometrics, unidimensionality assessment remains a challenge. Community detection algorithms are the most common approach to estimate dimensionality in networks. Many community detection algorithms maximize an objective criterion called modularity. A limitation of modularity is that it penalizes unidimensional structures in networks, favoring two or more communities (dimensions). In this study, this penalization is discussed and a solution is offered. Then, a Monte Carlo simulation using one- and two-factor models is performed. Key to the simulation was the condition of model error or the misfit of the population factor model to the generated data. Based on previous simulation studies, several community detection algorithms that have performed well with unidimensional structures (Leading Eigenvalue, Leiden, Louvain, and Walktrap) were compared. A grid search was performed on the tunable parameters of these algorithms to determine the optimal trade-off between unidimensional and bidimensional recovery. The best-performing parameters for each algorithm were then compared against each other as well as maximum likelihood factor analysis and parallel analysis (PA) with mean and 95th percentile eigenvalues. Overall, the Leiden and Louvain algorithms and PA methods were the most accurate methods to recover unidimensional and bidimensional structures and were the most robust to model error. More nuanced method recommendations for specific unidimensional and bidimensional conditions are provided. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data integrity in an online world: Demonstration of multimodal bot screening tools and considerations for preserving data integrity in two online social and behavioral research studies with marginalized populations.","authors":"Arryn A Guy,Matthew J Murphy,David G Zelaya,Christopher W Kahler,Shufang Sun","doi":"10.1037/met0000696","DOIUrl":"https://doi.org/10.1037/met0000696","url":null,"abstract":"Internet-based studies are widely used in social and behavioral health research, yet bots and fraud from \"survey farming\" bring significant threats to data integrity. For research centering marginalized communities, data integrity is an ethical imperative, as fraudulent data at a minimum poses a threat to scientific integrity, and worse could even promulgate false, negative stereotypes about the population of interest. Using data from two online surveys of sexual and gender minority populations (young men who have sex with men and transgender women of color), we (a) demonstrate the use of online survey techniques to identify and mitigate internet-based fraud, (b) differentiate techniques for and identify two different types of \"survey farming\" (i.e., bots and false responders), and (c) demonstrate the consequences of those distinct types of fraud on sample characteristics and statistical inferences, if fraud goes unaddressed. We provide practical recommendations for internet-based studies in psychological, social, and behavioral health research to ensure data integrity and discuss implications for future research testing data integrity techniques. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trying to outrun causality with machine learning: Limitations of model explainability techniques for exploratory research.","authors":"Matthew J Vowels","doi":"10.1037/met0000699","DOIUrl":"https://doi.org/10.1037/met0000699","url":null,"abstract":"Machine learning explainability techniques have been proposed as a means for psychologists to \"explain\" or interrogate a model in order to gain an understanding of a phenomenon of interest. Researchers concerned with imposing overly restrictive functional form (e.g., as would be the case in a linear regression) may be motivated to use machine learning algorithms in conjunction with explainability techniques, as part of exploratory research, with the goal of identifying important variables that are associated with/predictive of an outcome of interest. However, and as we demonstrate, machine learning algorithms are highly sensitive to the underlying causal structure in the data. The consequences of this are that predictors which are deemed by the explainability technique to be unrelated/unimportant/unpredictive, may actually be highly associated with the outcome. Rather than this being a limitation of explainability techniques per se, we show that it is rather a consequence of the mathematical implications of regression, and the interaction of these implications with the associated conditional independencies of the underlying causal structure. We provide some alternative recommendations for psychologists wanting to explore the data for important variables. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential analysis of variance: Increasing efficiency of hypothesis testing.","authors":"Meike Steinhilber,Martin Schnuerch,Anna-Lena Schubert","doi":"10.1037/met0000677","DOIUrl":"https://doi.org/10.1037/met0000677","url":null,"abstract":"Researchers commonly use analysis of variance (ANOVA) to statistically test results of factorial designs. Performing an a priori power analysis is crucial to ensure that the ANOVA is sufficiently powered, however, it often poses a challenge and can result in large sample sizes, especially if the expected effect size is small. Due to the high prevalence of small effect sizes in psychology, studies are frequently underpowered as it is often economically unfeasible to gather the necessary sample size for adequate Type-II error control. Here, we present a more efficient alternative to the fixed ANOVA, the so-called sequential ANOVA that we implemented in the R package \"sprtt.\" The sequential ANOVA is based on the sequential probability ratio test (SPRT) that uses a likelihood ratio as a test statistic and controls for long-term error rates. SPRTs gather evidence for both the null and the alternative hypothesis and conclude this process when a sufficient amount of evidence has been gathered to accept one of the two hypotheses. Through simulations, we show that the sequential ANOVA is more efficient than the fixed ANOVA and reliably controls long-term error rates. Additionally, robustness analyses revealed that the sequential and fixed ANOVAs exhibit analogous properties when their underlying assumptions are violated. Taken together, our results demonstrate that the sequential ANOVA is an efficient alternative to fixed sample designs for hypothesis testing. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving inferential analyses predata and postdata.","authors":"David Trafimow,Tingting Tong,Tonghui Wang,S T Boris Choy,Liqun Hu,Xiangfei Chen,Cong Wang,Ziyuan Wang","doi":"10.1037/met0000697","DOIUrl":"https://doi.org/10.1037/met0000697","url":null,"abstract":"The standard statistical procedure for researchers comprises a two-step process. Before data collection, researchers perform power analyses, and after data collection, they perform significance tests. Many have proffered arguments that significance tests are unsound, but that issue will not be rehashed here. It is sufficient that even for aficionados, there is the usual disclaimer that null hypothesis significance tests provide extremely limited information, thereby rendering them vulnerable to misuse. There is a much better postdata option that provides a higher grade of useful information. Based on work by Trafimow and his colleagues (for a review, see Trafimow, 2023a), it is possible to estimate probabilities of being better off or worse off, by varying degrees, depending on whether one gets the treatment or not. In turn, if the postdata goal switches from significance testing to a concern with probabilistic advantages or disadvantages, an implication is that the predata goal ought to switch accordingly. The a priori procedure, with its focus on parameter estimation, should replace conventional power analysis as a predata procedure. Therefore, the new two-step procedure should be the a priori procedure predata and estimations of probabilities of being better off, or worse off, to varying degrees, postdata. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consistency of Bayes factor estimates in Bayesian analysis of variance.","authors":"Roland Pfister","doi":"10.1037/met0000703","DOIUrl":"https://doi.org/10.1037/met0000703","url":null,"abstract":"Factorial designs lend themselves to a variety of analyses with Bayesian methodology. The de facto standard is Bayesian analysis of variance (ANOVA) with Monte Carlo integration. Alternative, and readily available methods, are Bayesian ANOVA with Laplace approximation as well as Bayesian t tests for individual effects. This simulation study compared the three approaches regarding ordinal and metric agreement of the resulting Bayes factors for a 2 × 2 mixed design. Simulation results indicate remarkable disagreement of the three methods in certain cases, particularly when effect sizes are small and studies include small sample sizes. Findings further replicate and extend previous observations of substantial variability of ANOVAs with Monte Carlo integration across different runs of one and the same analysis. These observations showcase important limitations of current implementations of Bayesian ANOVA. Researchers should be mindful of these limitations when interpreting corresponding analyses, ideally applying multiple approaches to establish converging results. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling construct change over time amidst potential changes in construct measurement: A longitudinal moderated factor analysis approach.","authors":"Siyuan Marco Chen, Daniel J Bauer","doi":"10.1037/met0000685","DOIUrl":"https://doi.org/10.1037/met0000685","url":null,"abstract":"<p><p>In analyzing longitudinal data with growth curve models, a critical assumption is that changes in the observed measures reflect construct changes and not changes in the manifestation of the construct over time. However, growth curve models are often fit to a repeated measure constructed as a sum or mean of scale items, making an implicit assumption of constancy of measurement. This practice risks confounding actual construct change with changes in measurement (i.e., differential item functioning [DIF]), threatening the validity of conclusions. An improved method that avoids such confounding is the second-order growth curve (SGC) model. It specifies a measurement model at each occasion of measurement that can be evaluated for invariance over time. The applicability of the SGC model is hindered by key limitations: (a) the SGC model treats time as continuous when modeling construct growth but as discrete when modeling measurement, reducing interpretability and parsimony; (b) the evaluation of DIF becomes increasingly error-prone given multiple timepoints and groups; (c) DIF associated with continuous covariates is difficult to incorporate. Drawing on moderated nonlinear factor analysis, we propose an alternative approach that provides a parsimonious framework for including many time points and DIF from different types of covariates. We implement this model through Bayesian estimation, allowing for incorporation of regularizing priors to facilitate efficient evaluation of DIF. We demonstrate a two-step workflow of measurement evaluation and growth modeling, with an empirical example examining changes in adolescent delinquency over time. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142111363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}