Title: Data integrity in an online world: Demonstration of multimodal bot screening tools and considerations for preserving data integrity in two online social and behavioral research studies with marginalized populations.
Authors: Arryn A Guy, Matthew J Murphy, David G Zelaya, Christopher W Kahler, Shufang Sun
Journal: Psychological Methods
DOI: 10.1037/met0000696
Published online: 2024-09-09
Abstract: Internet-based studies are widely used in social and behavioral health research, yet bots and fraud from "survey farming" pose significant threats to data integrity. For research centering marginalized communities, data integrity is an ethical imperative: fraudulent data at a minimum threatens scientific integrity and, at worst, could promulgate false, negative stereotypes about the population of interest. Using data from two online surveys of sexual and gender minority populations (young men who have sex with men and transgender women of color), we (a) demonstrate the use of online survey techniques to identify and mitigate internet-based fraud, (b) differentiate techniques for and identify two different types of "survey farming" (i.e., bots and false responders), and (c) demonstrate the consequences of those distinct types of fraud on sample characteristics and statistical inferences if fraud goes unaddressed. We provide practical recommendations for internet-based studies in psychological, social, and behavioral health research to ensure data integrity and discuss implications for future research testing data integrity techniques. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Trying to outrun causality with machine learning: Limitations of model explainability techniques for exploratory research.","authors":"Matthew J Vowels","doi":"10.1037/met0000699","DOIUrl":"https://doi.org/10.1037/met0000699","url":null,"abstract":"Machine learning explainability techniques have been proposed as a means for psychologists to \"explain\" or interrogate a model in order to gain an understanding of a phenomenon of interest. Researchers concerned with imposing overly restrictive functional form (e.g., as would be the case in a linear regression) may be motivated to use machine learning algorithms in conjunction with explainability techniques, as part of exploratory research, with the goal of identifying important variables that are associated with/predictive of an outcome of interest. However, and as we demonstrate, machine learning algorithms are highly sensitive to the underlying causal structure in the data. The consequences of this are that predictors which are deemed by the explainability technique to be unrelated/unimportant/unpredictive, may actually be highly associated with the outcome. Rather than this being a limitation of explainability techniques per se, we show that it is rather a consequence of the mathematical implications of regression, and the interaction of these implications with the associated conditional independencies of the underlying causal structure. We provide some alternative recommendations for psychologists wanting to explore the data for important variables. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"9 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential analysis of variance: Increasing efficiency of hypothesis testing.","authors":"Meike Steinhilber,Martin Schnuerch,Anna-Lena Schubert","doi":"10.1037/met0000677","DOIUrl":"https://doi.org/10.1037/met0000677","url":null,"abstract":"Researchers commonly use analysis of variance (ANOVA) to statistically test results of factorial designs. Performing an a priori power analysis is crucial to ensure that the ANOVA is sufficiently powered, however, it often poses a challenge and can result in large sample sizes, especially if the expected effect size is small. Due to the high prevalence of small effect sizes in psychology, studies are frequently underpowered as it is often economically unfeasible to gather the necessary sample size for adequate Type-II error control. Here, we present a more efficient alternative to the fixed ANOVA, the so-called sequential ANOVA that we implemented in the R package \"sprtt.\" The sequential ANOVA is based on the sequential probability ratio test (SPRT) that uses a likelihood ratio as a test statistic and controls for long-term error rates. SPRTs gather evidence for both the null and the alternative hypothesis and conclude this process when a sufficient amount of evidence has been gathered to accept one of the two hypotheses. Through simulations, we show that the sequential ANOVA is more efficient than the fixed ANOVA and reliably controls long-term error rates. Additionally, robustness analyses revealed that the sequential and fixed ANOVAs exhibit analogous properties when their underlying assumptions are violated. Taken together, our results demonstrate that the sequential ANOVA is an efficient alternative to fixed sample designs for hypothesis testing. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"49 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Improving inferential analyses predata and postdata.
Authors: David Trafimow, Tingting Tong, Tonghui Wang, S T Boris Choy, Liqun Hu, Xiangfei Chen, Cong Wang, Ziyuan Wang
Journal: Psychological Methods
DOI: 10.1037/met0000697
Published online: 2024-09-09
Abstract: The standard statistical procedure for researchers comprises a two-step process. Before data collection, researchers perform power analyses, and after data collection, they perform significance tests. Many have proffered arguments that significance tests are unsound, but that issue will not be rehashed here. It is sufficient that even for aficionados, there is the usual disclaimer that null hypothesis significance tests provide extremely limited information, thereby rendering them vulnerable to misuse. There is a much better postdata option that provides a higher grade of useful information. Based on work by Trafimow and his colleagues (for a review, see Trafimow, 2023a), it is possible to estimate probabilities of being better off or worse off, by varying degrees, depending on whether one gets the treatment or not. In turn, if the postdata goal switches from significance testing to a concern with probabilistic advantages or disadvantages, an implication is that the predata goal ought to switch accordingly. The a priori procedure, with its focus on parameter estimation, should replace conventional power analysis as a predata procedure. Therefore, the new two-step procedure should be the a priori procedure predata and estimations of probabilities of being better off, or worse off, to varying degrees, postdata. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
{"title":"Consistency of Bayes factor estimates in Bayesian analysis of variance.","authors":"Roland Pfister","doi":"10.1037/met0000703","DOIUrl":"https://doi.org/10.1037/met0000703","url":null,"abstract":"Factorial designs lend themselves to a variety of analyses with Bayesian methodology. The de facto standard is Bayesian analysis of variance (ANOVA) with Monte Carlo integration. Alternative, and readily available methods, are Bayesian ANOVA with Laplace approximation as well as Bayesian t tests for individual effects. This simulation study compared the three approaches regarding ordinal and metric agreement of the resulting Bayes factors for a 2 × 2 mixed design. Simulation results indicate remarkable disagreement of the three methods in certain cases, particularly when effect sizes are small and studies include small sample sizes. Findings further replicate and extend previous observations of substantial variability of ANOVAs with Monte Carlo integration across different runs of one and the same analysis. These observations showcase important limitations of current implementations of Bayesian ANOVA. Researchers should be mindful of these limitations when interpreting corresponding analyses, ideally applying multiple approaches to establish converging results. (PsycInfo Database Record (c) 2024 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"82 1","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142165971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling construct change over time amidst potential changes in construct measurement: A longitudinal moderated factor analysis approach.","authors":"Siyuan Marco Chen, Daniel J Bauer","doi":"10.1037/met0000685","DOIUrl":"https://doi.org/10.1037/met0000685","url":null,"abstract":"<p><p>In analyzing longitudinal data with growth curve models, a critical assumption is that changes in the observed measures reflect construct changes and not changes in the manifestation of the construct over time. However, growth curve models are often fit to a repeated measure constructed as a sum or mean of scale items, making an implicit assumption of constancy of measurement. This practice risks confounding actual construct change with changes in measurement (i.e., differential item functioning [DIF]), threatening the validity of conclusions. An improved method that avoids such confounding is the second-order growth curve (SGC) model. It specifies a measurement model at each occasion of measurement that can be evaluated for invariance over time. The applicability of the SGC model is hindered by key limitations: (a) the SGC model treats time as continuous when modeling construct growth but as discrete when modeling measurement, reducing interpretability and parsimony; (b) the evaluation of DIF becomes increasingly error-prone given multiple timepoints and groups; (c) DIF associated with continuous covariates is difficult to incorporate. Drawing on moderated nonlinear factor analysis, we propose an alternative approach that provides a parsimonious framework for including many time points and DIF from different types of covariates. We implement this model through Bayesian estimation, allowing for incorporation of regularizing priors to facilitate efficient evaluation of DIF. We demonstrate a two-step workflow of measurement evaluation and growth modeling, with an empirical example examining changes in adolescent delinquency over time. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142111363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Factorization of person response profiles to identify summative profiles carrying central response patterns.
Authors: Se-Kang Kim
Journal: Psychological Methods
DOI: 10.1037/met0000568
Published: 2024-08-01 (Epub 2023-03-27)
Pages: 723-730
Abstract: A data matrix, where rows represent persons and columns represent measured subtests, can be viewed as a stack of person profiles, as rows are actually person profiles of observed responses on column subtests. Profile analysis seeks to identify a small number of latent profiles from a large number of person response profiles to identify central response patterns, which are useful for assessing the strengths and weaknesses of individuals across multiple dimensions in domains of interest. Moreover, the latent profiles are mathematically proven to be summative profiles that linearly combine all person response profiles. Since person response profiles are confounded with profile level and response pattern, the level effect must be controlled when they are factorized to identify a latent (or summative) profile that carries the response pattern effect. However, when the level effect is dominant but uncontrolled, only a summative profile carrying the level effect would be considered statistically meaningful according to a traditional metric (e.g., eigenvalue ≥ 1) or parallel analysis results. Nevertheless, the response pattern effect among individuals can provide assessment-relevant insights that are overlooked by conventional analysis; to achieve this, the level effect must be controlled. Consequently, the purpose of this study is to demonstrate how to correctly identify summative profiles containing central response patterns regardless of the centering techniques used on data sets. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Title: A comprehensive model framework for between-individual differences in longitudinal data.
Authors: Anja F Ernst, Casper J Albers, Marieke E Timmerman
Journal: Psychological Methods
DOI: 10.1037/met0000585
Published: 2024-08-01 (Epub 2023-06-12)
Pages: 748-766
Abstract: Across different fields of research, the similarities and differences between various longitudinal models are not always eminently clear due to differences in data structure, application area, and terminology. Here we propose a comprehensive model framework that will allow simple comparisons between longitudinal models, to ease their empirical application and interpretation. At the within-individual level, our model framework accounts for various attributes of longitudinal data, such as growth and decline, cyclical trends, and the dynamic interplay between variables over time. At the between-individual level, our framework contains continuous and categorical latent variables to account for between-individual differences. This framework encompasses several well-known longitudinal models, including multilevel regression models, growth curve models, growth mixture models, vector-autoregressive models, and multilevel vector-autoregressive models. The general model framework is specified and its key characteristics are illustrated using famous longitudinal models as concrete examples. Various longitudinal models are reviewed and it is shown that all these models can be united into our comprehensive model framework. Extensions to the model framework are discussed. Recommendations for selecting and specifying longitudinal models are made for empirical researchers who aim to account for between-individual differences. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Title: Inclusion Bayes factors for mixed hierarchical diffusion decision models.
Authors: Udo Boehm, Nathan J Evans, Quentin F Gronau, Dora Matzke, Eric-Jan Wagenmakers, Andrew J Heathcote
Journal: Psychological Methods
DOI: 10.1037/met0000582
Published: 2024-08-01 (Epub 2023-05-11)
Pages: 625-655
Abstract: Cognitive models provide a substantively meaningful quantitative description of latent cognitive processes. The quantitative formulation of these models supports cumulative theory building and enables strong empirical tests. However, the nonlinearity of these models and pervasive correlations among model parameters pose special challenges when applying cognitive models to data. Firstly, estimating cognitive models typically requires large hierarchical data sets that need to be accommodated by an appropriate statistical structure within the model. Secondly, statistical inference needs to appropriately account for model uncertainty to avoid overconfidence and biased parameter estimates. In the present work, we show how these challenges can be addressed through a combination of Bayesian hierarchical modeling and Bayesian model averaging. To illustrate these techniques, we apply the popular diffusion decision model to data from a collaborative selective influence study. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Title: Multivariate analysis of covariance for heterogeneous and incomplete data.
Authors: Guillermo Vallejo, María Paula Fernández, Pablo Esteban Livacic-Rojas
Journal: Psychological Methods
DOI: 10.1037/met0000558
Published: 2024-08-01 (Epub 2023-02-16)
Pages: 731-747
Abstract: This article discusses the robustness of the multivariate analysis of covariance (MANCOVA) test for an emergent variable system and proposes a modification of this test to obtain adequate information from heterogeneous normal observations. The proposed approach for testing potential effects in heterogeneous MANCOVA models can be adopted effectively, regardless of the degree of heterogeneity and sample size imbalance. As our method was not designed to handle missing values, we also show how to derive the formulas for pooling the results of multiple-imputation-based analyses into a single final estimate. Results of simulated studies and analyses of real data show that the proposed combining rules provide adequate coverage and power. Based on the current evidence, the two solutions suggested could be effectively used by researchers for testing hypotheses, provided that the data conform to normality. (PsycInfo Database Record (c) 2024 APA, all rights reserved).