{"title":"Estimating the change in meta-analytic effect size estimates after the application of publication bias adjustment methods.","authors":"Martina Sladekova, Lois E A Webb, Andy P Field","doi":"10.1037/met0000470","DOIUrl":"https://doi.org/10.1037/met0000470","url":null,"abstract":"<p><p>Publication bias poses a challenge for accurately synthesizing research findings using meta-analysis. A number of statistical methods have been developed to combat this problem by adjusting the meta-analytic estimates. Previous studies tended to apply these methods without regard to optimal conditions for each method's performance. The present study sought to estimate the typical effect size attenuation of these methods when they are applied to real meta-analytic data sets that match the conditions under which each method is known to remain relatively unbiased (such as sample size, level of heterogeneity, population effect size, and the level of publication bias). Four-hundred and 33 data sets from 90 articles published in psychology journals were reanalyzed using a selection of publication bias adjustment methods. The downward adjustment found in our sample was minimal, with greatest identified attenuation of <i>b</i> = -.032, 95% highest posterior density interval (HPD) ranging from -.055 to -.009, for the precision effect test (PET). Some methods tended to adjust upward, and this was especially true for data sets with a sample size smaller than 10. We propose that researchers should seek to explore the full range of plausible estimates for the effects they are studying and note that these methods may not be able to combat bias in small samples (with less than 10 primary studies). We argue that although the effect size attenuation we found tended to be minimal, this should not be taken as an indication of low levels of publication bias in psychology. 
We discuss the findings with reference to new developments in Bayesian methods for publication bias adjustment, and the recent methodological reforms in psychology. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"664-686"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10002718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
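The PET adjustment named in the record above can be sketched briefly: observed effect sizes are regressed on their standard errors by inverse-variance weighted least squares, and the intercept (the predicted effect at a standard error of zero) serves as the bias-adjusted estimate. This is an independent illustration, not the authors' code; the function name and data are hypothetical.

```python
# Minimal sketch of the precision effect test (PET): weighted least
# squares regression of effect sizes on standard errors, with weights
# equal to inverse variances. The intercept is the adjusted estimate.

def pet_intercept(effects, std_errors):
    weights = [1 / se ** 2 for se in std_errors]
    sw = sum(weights)
    x_bar = sum(w * se for w, se in zip(weights, std_errors)) / sw
    y_bar = sum(w * d for w, d in zip(weights, effects)) / sw
    num = sum(w * (se - x_bar) * (d - y_bar)
              for w, se, d in zip(weights, std_errors, effects))
    den = sum(w * (se - x_bar) ** 2 for w, se in zip(weights, std_errors))
    slope = num / den
    return y_bar - slope * x_bar  # intercept = bias-adjusted effect size

# Hypothetical meta-analytic data: small studies (large SEs) show larger
# effects, the small-study pattern associated with publication bias.
effects = [0.60, 0.45, 0.30, 0.25, 0.20]
std_errors = [0.30, 0.25, 0.15, 0.10, 0.05]
adjusted = pet_intercept(effects, std_errors)
```

With data showing this small-study pattern, the intercept falls below the naive inverse-variance weighted mean, which is the downward adjustment the abstract quantifies.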
{"title":"Linear equality constraints: Reformulations of criterion related profile analysis with extensions to moderated regression for multiple groups.","authors":"Mark L Davison, Ernest C Davenport, Hao Jia","doi":"10.1037/met0000430","DOIUrl":"https://doi.org/10.1037/met0000430","url":null,"abstract":"<p><p>Criterion-related profile analysis (CPA) is a least squares linear regression technique for identifying a criterion-related pattern (CRP) among predictor variables and for quantifying the variance accounted for by the pattern. A CRP is a pattern, described by a vector of contrast coefficients, such that predictor profiles with higher similarity to the pattern have higher expected criterion scores. A review of applications shows that researchers have extended the analysis to meta-analyses, logit regression, canonical regression, and structural equation modeling. It also reveals a need for better methods of comparing CRPs across populations. While the original method for identifying the CRP tends to underestimate the variance accounted for by pattern only, both the pattern identified by the original method and the pattern identified by the new method proposed here have useful and complementary interpretations. Imposing linear equality constraints on regression coefficients yields a more accurate method of estimating the variance accounted for by pattern only, and this constrained approach leads to moderated regression models for investigating whether the CRP is the same in two or more populations. Finally, we show how the elements in Cronbach and Gleser's (1953) classic profile decomposition are related to the linear regression model and the CPA model. Academic ability tests as predictors of college GPA are used to illustrate the analyses. Implications of the profile pattern models for psychological theory and applied decision-making are discussed. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"600-612"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10019028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
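The Cronbach and Gleser (1953) profile decomposition the abstract refers to splits a predictor profile into its level (the mean across predictors) and its pattern (deviations from that mean); pattern contrasts of this form are what a CRP's contrast coefficients describe. A minimal sketch with hypothetical scores:

```python
# Decompose a predictor profile into level (profile mean) and pattern
# (deviations from the mean). The pattern component sums to zero by
# construction, so it behaves as a vector of contrast coefficients.

def decompose_profile(scores):
    level = sum(scores) / len(scores)
    pattern = [s - level for s in scores]
    return level, pattern

# Hypothetical profile: three academic ability test scores for one person.
profile = [520.0, 610.0, 580.0]
level, pattern = decompose_profile(profile)
```

The level captures overall elevation; the pattern captures shape, which is the part CPA relates to the criterion.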
{"title":"Spectral and cross-spectral analysis-A tutorial for psychologists and social scientists.","authors":"Matthew J Vowels, Laura M Vowels, Nathan D Wood","doi":"10.1037/met0000399","DOIUrl":"https://doi.org/10.1037/met0000399","url":null,"abstract":"<p><p>Social scientists have become increasingly interested in using intensive longitudinal methods to study social phenomena that change over time. Many of these phenomena are expected to exhibit cycling fluctuations (e.g., sleep, mood, sexual desire). However, researchers typically employ analytical methods which are unable to model such patterns. We present spectral and cross-spectral analysis as means to address this limitation. Spectral analysis provides a means to interrogate time series from a different, frequency domain perspective, and to understand how the time series may be decomposed into their constituent periodic components. Cross-spectral extends this to dyadic data and allows for synchrony and time offsets to be identified. The techniques are commonly used in the physical and engineering sciences, and we discuss how to apply these popular analytical techniques to the social sciences while also demonstrating how to undertake estimations of significance and effect size. In this tutorial we begin by introducing spectral and cross-spectral analysis, before demonstrating its application to simulated univariate and bivariate individual- and group-level data. We employ cross-power spectral density techniques to understand synchrony between the individual time series in a dyadic time series, and circular statistics and polar plots to understand phase offsets between constituent periodic components. Finally, we present a means to undertake nonparameteric bootstrapping in order to estimate the significance, and derive a proxy for effect size. A Jupyter Notebook (Python 3.6) is provided as supplementary material to aid researchers who intend to apply these techniques. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"631-650"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9649808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
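The frequency-domain decomposition this tutorial teaches can be illustrated in a few lines: a discrete Fourier transform breaks a time series into periodic components, and the largest-magnitude coefficient identifies the dominant cycle. The sketch below is an independent illustration (the article's own supplementary Jupyter Notebook is separate); the simulated series is hypothetical.

```python
import cmath
import math

# Naive discrete Fourier transform: coefficient k measures how strongly
# a cycle with k periods over the series length is present in the data.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64  # e.g., 64 equally spaced daily mood ratings
# Simulated series with exactly 8 cycles over the observation window.
series = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
coeffs = dft(series)

# Only the first half of the spectrum is informative for real-valued
# input (the second half mirrors it), so search k = 1 .. n/2 - 1.
dominant = max(range(1, n // 2), key=lambda k: abs(coeffs[k]))
```

For real analyses an FFT routine would replace this O(n²) loop, but the decomposition it computes is the same.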
{"title":"Perspectives on Bayesian inference and their implications for data analysis.","authors":"Roy Levy, Daniel McNeish","doi":"10.1037/met0000443","DOIUrl":"https://doi.org/10.1037/met0000443","url":null,"abstract":"<p><p>Use of Bayesian methods has proliferated in recent years as technological and software developments have made Bayesian methods more approachable for researchers working with empirical data. Connected with the increased usage of Bayesian methods in empirical studies is a corresponding increase in recommendations and best practices for Bayesian methods. However, given the extensive scope of Bayes, theorem, there are various compelling perspectives one could adopt for its application. This paper first describes five different perspectives, including examples of different methodologies that are aligned within these perspectives. We then discuss how the different perspectives can have implications for modeling and reporting practices, such that approaches and recommendations that are perfectly reasonable under one perspective might be unreasonable when viewed from another perspective. The ultimate goal is to show the heterogeneity of defensible practices in Bayesian methods and to foster a greater appreciation for the variety of orientations that exist. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"719-739"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9643868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multidimensional nonadditivity in one-facet g-theory designs: A profile analytic approach.","authors":"Joseph H Grochowalski, Ezgi Ayturk, Amy Hendrickson","doi":"10.1037/met0000452","DOIUrl":"https://doi.org/10.1037/met0000452","url":null,"abstract":"<p><p>We introduce a new method for estimating the degree of nonadditivity in a one-facet generalizability theory design. One-facet G-theory designs have only one observation per cell, such as persons answering items in a test, and assume that there is no interaction between facets. When there is interaction, the model becomes nonadditive, and G-theory variance estimates and reliability coefficients are likely biased. We introduce a multidimensional method for detecting interaction and nonadditivity in G-theory that has less bias and smaller error variance than methods that use the one-degree of freedom method based on Tukey's test for nonadditivity. The method we propose is more flexible and detects a greater variety of interactions than the formulation based on Tukey's test. Further, the proposed method is descriptive and illustrates the nature of the facet interaction using profile analysis, giving insight into potential interaction like rater biases, DIF, threats to test security, and other possible sources of systematic construct-irrelevant variance. We demonstrate the accuracy of our method using a simulation study and illustrate its descriptive profile features with a real data analysis of neurocognitive test scores. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"651-663"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10002274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
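For readers unfamiliar with the one-facet design the abstract describes, a standard ANOVA-based variance component decomposition (persons × items, one observation per cell) can be sketched as follows. This is the textbook additive G-theory computation, not the authors' profile-analytic method; the score matrix is hypothetical. Note that with one observation per cell, the residual mean square confounds the person-by-item interaction with error, which is why nonadditivity biases these estimates.

```python
# One-facet (persons x items) G-theory variance components via
# expected mean squares. scores[p][i] = score of person p on item i.

def one_facet_components(scores):
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p for i in range(n_i)]
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((scores[p][i] - grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = (ss_tot - ss_p - ss_i) / ((n_p - 1) * (n_i - 1))
    return {"person": (ms_p - ms_res) / n_i,   # universe-score variance
            "item": (ms_i - ms_res) / n_p,     # item difficulty variance
            "residual": ms_res}                # interaction + error, confounded

scores = [[2, 3, 4], [3, 4, 5], [5, 5, 6], [1, 2, 2]]
components = one_facet_components(scores)
```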
{"title":"Centering categorical predictors in multilevel models: Best practices and interpretation.","authors":"Haley E Yaremych, Kristopher J Preacher, Donald Hedeker","doi":"10.1037/met0000434","DOIUrl":"https://doi.org/10.1037/met0000434","url":null,"abstract":"<p><p>The topic of centering in multilevel modeling (MLM) has received substantial attention from methodologists, as different centering choices for lower-level predictors present important ramifications for the estimation and interpretation of model parameters. However, the centering literature has focused almost exclusively on continuous predictors, with little attention paid to whether and how categorical predictors should be centered, despite their ubiquity across applied fields. Alongside this gap in the methodological literature, a review of applied articles showed that researchers center categorical predictors infrequently and inconsistently. Algebraically and statistically, continuous and categorical predictors behave the same, but researchers using them do not, and for many, interpreting the effects of categorical predictors is not intuitive. Thus, the goals of this tutorial article are twofold: to clarify why and how categorical predictors should be centered in MLM, and to explain how multilevel regression coefficients resulting from centered categorical predictors should be interpreted. We first provide algebraic support showing that uncentered coding variables result in a conflated blend of the within- and between-cluster effects of a multicategorical predictor, whereas appropriate centering techniques yield level-specific effects. Next, we provide algebraic derivations to illuminate precisely how the within- and between-cluster effects of a multicategorical predictor should be interpreted under dummy, contrast, and effect coding schemes. Finally, we provide a detailed demonstration of our conclusions with an empirical example. 
Implications for practice, including relevance of our findings to categorical control variables (i.e., covariates), interaction terms with categorical focal predictors, and multilevel latent variable models, are discussed. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"613-630"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9646799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
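The centering logic this abstract describes can be made concrete with a small sketch: cluster-mean centering a dummy-coded predictor splits it into a between-cluster part (each cluster's proportion in the coded category) and a within-cluster part (each observation's deviation from its cluster's proportion), so the two level-specific effects are no longer conflated. The data and function name below are hypothetical.

```python
# Cluster-mean center a dummy variable. The cluster mean of a dummy is
# the cluster's category proportion; the within component is each
# observation's deviation from that proportion.

def center_within_cluster(clusters, dummy):
    totals, counts = {}, {}
    for c, d in zip(clusters, dummy):
        totals[c] = totals.get(c, 0) + d
        counts[c] = counts.get(c, 0) + 1
    means = {c: totals[c] / counts[c] for c in totals}
    between = [means[c] for c in clusters]              # level-2 component
    within = [d - m for d, m in zip(dummy, between)]    # level-1 component
    return within, between

clusters = [1, 1, 1, 2, 2, 2]   # e.g., students nested in two schools
dummy    = [1, 0, 1, 0, 0, 1]   # membership in one category of a predictor
within, between = center_within_cluster(clusters, dummy)
```

Entering `within` and `between` as separate predictors in the multilevel model yields the within- and between-cluster effects, rather than the conflated blend produced by the raw dummy.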
{"title":"An introductory guide for conducting psychological research with big data.","authors":"Michela Vezzoli, Cristina Zogmaister","doi":"10.1037/met0000513","DOIUrl":"https://doi.org/10.1037/met0000513","url":null,"abstract":"<p><p>Big Data can bring enormous benefits to psychology. However, many psychological researchers show skepticism in undertaking Big Data research. Psychologists often do not take Big Data into consideration while developing their research projects because they have difficulties imagining how Big Data could help in their specific field of research, imagining themselves as \"Big Data scientists,\" or for lack of specific knowledge. This article provides an introductory guide for conducting Big Data research for psychologists who are considering using this approach and want to have a general idea of its processes. By taking the Knowledge Discovery from Database steps as the <i>fil rouge</i>, we provide useful indications for finding data suitable for psychological investigations, describe how these data can be preprocessed, and list some techniques to analyze them and programming languages (R and Python) through which all these steps can be realized. In doing so, we explain the concepts with the terminology and take examples from psychology. For psychologists, familiarizing with the language of data science is important because it may appear difficult and esoteric at first approach. As Big Data research is often multidisciplinary, this overview helps build a general insight into the research steps and a common language, facilitating collaboration across different fields. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"580-599"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9728336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decisions about equivalence: A comparison of TOST, HDI-ROPE, and the Bayes factor.","authors":"Maximilian Linde, Jorge N Tendeiro, Ravi Selker, Eric-Jan Wagenmakers, Don van Ravenzwaaij","doi":"10.1037/met0000402","DOIUrl":"https://doi.org/10.1037/met0000402","url":null,"abstract":"<p><p>Some important research questions require the ability to find evidence for two conditions being practically equivalent. This is impossible to accomplish within the traditional frequentist null hypothesis significance testing framework; hence, other methodologies must be utilized. We explain and illustrate three approaches for finding evidence for equivalence: The frequentist two one-sided tests procedure, the Bayesian highest density interval region of practical equivalence procedure, and the Bayes factor interval null procedure. We compare the classification performances of these three approaches for various plausible scenarios. The results indicate that the Bayes factor interval null approach compares favorably to the other two approaches in terms of statistical power. Critically, compared with the Bayes factor interval null procedure, the two one-sided tests and the highest density interval region of practical equivalence procedures have limited discrimination capabilities when the sample size is relatively small: Specifically, in order to be practically useful, these two methods generally require over 250 cases within each condition when rather large equivalence margins of approximately .2 or .3 are used; for smaller equivalence margins even more cases are required. Because of these results, we recommend that researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence, especially for studies that are constrained on sample size. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"740-755"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10002258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
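The two one-sided tests (TOST) procedure compared in this record declares equivalence only when both one-sided tests against the equivalence bounds reject. A minimal sketch follows, using a large-sample normal (z) approximation via `math.erf` instead of the t distribution the procedure normally uses; the function name and summary statistics are hypothetical.

```python
import math

def tost_equivalent(mean_diff, se, margin, alpha=0.05):
    """TOST with a normal approximation: test H0: diff <= -margin and
    H0: diff >= +margin; declare equivalence only if BOTH reject."""
    z_lower = (mean_diff + margin) / se
    z_upper = (mean_diff - margin) / se
    # Standard normal tail probabilities via the error function.
    p_lower = 1 - 0.5 * (1 + math.erf(z_lower / math.sqrt(2)))  # P(Z > z_lower)
    p_upper = 0.5 * (1 + math.erf(z_upper / math.sqrt(2)))      # P(Z < z_upper)
    return max(p_lower, p_upper) < alpha

# Hypothetical summaries: a small observed difference with a precise
# estimate supports equivalence under a margin of 0.3; the same
# difference with a noisier estimate does not.
precise = tost_equivalent(0.05, 0.08, 0.3)
noisy = tost_equivalent(0.05, 0.20, 0.3)
```

The noisy case illustrates the abstract's point: with too little precision (small samples), TOST cannot discriminate, and equivalence goes undeclared even for tiny observed differences.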
{"title":"Measurement invariance, selection invariance, and fair selection revisited.","authors":"Remco Heesen, Jan-Willem Romeijn","doi":"10.1037/met0000491","DOIUrl":"https://doi.org/10.1037/met0000491","url":null,"abstract":"<p><p>This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 3","pages":"687-690"},"PeriodicalIF":7.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9644365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What are the mathematical bounds for coefficient α?","authors":"Niels Waller, William Revelle","doi":"10.1037/met0000583","DOIUrl":"https://doi.org/10.1037/met0000583","url":null,"abstract":"<p><p>Coefficient α, although ubiquitous in the research literature, is frequently criticized for being a poor estimate of test reliability. In this note, we consider the range of α and prove that it has no lower bound (i.e., α ∈ ( - ∞, 1]). While outlining our proofs, we present algorithms for generating data sets that will yield any fixed value of α in its range. We also prove that for some data sets-even those with appreciable item correlations-α is undefined. Although α is a putative estimate of the correlation between parallel forms, it is not a correlation as α can assume any value below-1 (and α values below 0 are nonsensical reliability estimates). In the online supplemental materials, we provide R code for replicating our empirical findings and for generating data sets with user-defined α values. We hope that researchers will use this code to better understand the limitations of α as an index of scale reliability. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9892839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}