Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2022-05-19. DOI: 10.1037/met0000495. Pages: 65-87.
Steffen Grønneberg, Njål Foldnes
Factor analyzing ordinal items requires substantive knowledge of response marginals.
Abstract: In the social sciences, measurement scales often consist of ordinal items and are commonly analyzed using factor analysis. Either data are treated as continuous, or a discretization framework is imposed in order to take the ordinal scale properly into account. Correlational analysis is central in both approaches, and we review recent theory on correlations obtained from ordinal data. To ensure appropriate estimation, the item distributions prior to discretization should be (approximately) known, or the thresholds should be known to be equally spaced. We refer to such knowledge as substantive because it may not be extracted from the data, but must be rooted in expert knowledge about the data-generating process. An illustrative case is presented where absence of substantive knowledge of the item distributions inevitably leads the analyst to conclude that a truly two-dimensional case is perfectly one-dimensional. Additional studies probe the extent to which violation of the standard assumption of underlying normality leads to bias in correlations and factor models. As a remedy, we propose an adjusted polychoric estimator for ordinal factor analysis that takes substantive knowledge into account. We also demonstrate how to use the adjusted estimator in sensitivity analysis when the continuous item distributions are known only approximately.
Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2022-11-03. DOI: 10.1037/met0000528. Pages: 48-64.
Jonas M B Haslbeck, Riet van Bork
Estimating the number of factors in exploratory factor analysis via out-of-sample prediction errors.
Abstract: Exploratory factor analysis (EFA) is one of the most popular statistical models in psychological science. A key problem in EFA is to estimate the number of factors. In this article, we present a new method for estimating the number of factors based on minimizing the out-of-sample prediction error of candidate factor models. We show in an extensive simulation study that our method slightly outperforms existing methods, including parallel analysis, Bayesian information criterion (BIC), Akaike information criterion (AIC), root mean squared error of approximation (RMSEA), and exploratory graph analysis. In addition, we show that, among the best performing methods, our method is the one that is most robust across different specifications of the true factor model. We provide an implementation of our method in the R package fspe.
Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2023-11-13. DOI: 10.1037/met0000596. Pages: 88-98.
Harlan Campbell
Equivalence testing for linear regression.
Abstract: We introduce equivalence testing procedures for linear regression analyses. Such tests can be very useful for confirming the lack of a meaningful association between a continuous outcome and a continuous or binary predictor. Specifically, we propose an equivalence test for unstandardized regression coefficients and an equivalence test for semipartial correlation coefficients. We review how to define valid hypotheses, how to calculate p values, and how these tests compare to an alternative Bayesian approach, with applications to examples in the literature.
Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2022-05-12. DOI: 10.1037/met0000484. Pages: 169-201.
Loes Crielaard, Jeroen F Uleman, Bas D L Châtel, Sacha Epskamp, Peter M A Sloot, Rick Quax
Refining the causal loop diagram: A tutorial for maximizing the contribution of domain expertise in computational system dynamics modeling.
Abstract: Complexity science and systems thinking are increasingly recognized as relevant paradigms for studying systems where biology, psychology, and socioenvironmental factors interact. The application of systems thinking, however, often stops at developing a conceptual model that visualizes the mapping of causal links within a system, e.g., a causal loop diagram (CLD). While this is an important contribution in itself, it is imperative to subsequently formulate a computable version of a CLD in order to interpret the dynamics of the modeled system and simulate "what if" scenarios. We propose to realize this by deriving knowledge from experts' mental models in biopsychosocial domains. This article first describes the steps required for capturing expert knowledge in a CLD such that it may result in a computational system dynamics model (SDM). For this purpose, we introduce several annotations to the CLD that facilitate this intended conversion. This annotated CLD (aCLD) includes sources of evidence, intermediary variables, functional forms of causal links, and the distinction between uncertain and known-to-be-absent causal links. We propose an algorithm for developing an aCLD that includes these annotations. We then describe how to formulate an SDM based on the aCLD. The described steps for this conversion help identify, quantify, and potentially reduce sources of uncertainty and obtain confidence in the results of the SDM's simulations. We utilize a running example that illustrates each step of this conversion process. The systematic approach described in this article facilitates and advances the application of computational science methods to biopsychosocial systems.
Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2023-08-10. DOI: 10.1037/met0000551. Pages: 137-154.
Anja F Ernst, Marieke E Timmerman, Feng Ji, Bertus F Jeronimus, Casper J Albers
Mixture multilevel vector-autoregressive modeling.
Abstract: With the rising popularity of intensive longitudinal research, the modeling techniques for such data are increasingly focused on individual differences. Here we present mixture multilevel vector-autoregressive modeling, which extends multilevel vector-autoregressive modeling by including a mixture, to identify individuals with similar traits and dynamic processes. This exploratory model identifies mixture components, where each component refers to individuals with similarities in means (expressing traits), autoregressions, and cross-regressions (expressing dynamics), while allowing for some interindividual differences in these attributes. Key issues in modeling are discussed, and the issue of centering predictors is examined in a small simulation study. The proposed model is validated in a simulation study and used to analyze the affective data from the COGITO study. These data consist of samples for two different age groups, each comprising over 100 individuals who were measured for about 100 days. We demonstrate the advantage of exploratory identification of mixture components by analyzing these heterogeneous samples jointly. The model identifies three distinct components, and we provide an interpretation for each component motivated by developmental psychology.
Psychological Methods. Pub Date: 2024-02-01. Epub Date: 2022-04-14. DOI: 10.1037/met0000492. Pages: 99-116.
Martin Schnuerch, Daniel W Heck, Edgar Erdfelder
Waldian t tests: Sequential Bayesian t tests with controlled error probabilities.
Abstract: Bayesian t tests have become increasingly popular alternatives to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that error probabilities of statistical decisions remain uncontrolled. Previous approaches in the literature to remedy this problem require time-consuming simulations to calibrate decision thresholds. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call the Waldian t test, in the context of three recently proposed specifications of Bayesian t tests. Waldian t tests preserve the key idea of Bayesian t tests by assuming a distribution for the effect size under the alternative hypothesis. At the same time, they control expected frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds to the actual expected error rates under the specified statistical models. Thus, Waldian t tests are fully justified from both a Bayesian and a frequentist point of view. We highlight the relationship between Bayesian and frequentist error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for interested researchers.
{"title":"Data aggregation can lead to biased inferences in Bayesian linear mixed models and Bayesian analysis of variance.","authors":"Daniel J Schad, Bruno Nicenboim, Shravan Vasishth","doi":"10.1037/met0000621","DOIUrl":"https://doi.org/10.1037/met0000621","url":null,"abstract":"<p><p>Bayesian linear mixed-effects models (LMMs) and Bayesian analysis of variance (ANOVA) are increasingly being used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and is different from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often not clear. In Bayesian approaches, many authors use data aggregation at the by-subject level and estimate Bayes factors on aggregated data. Here, we use simulation-based calibration for model inference applied to several example experimental designs to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., violated sphericity assumption), Bayes factors are too conservative for contrasts where the variance is small and they are too liberal for contrasts where the variance is large. Running Bayesian ANOVA on aggregated data can-if the sphericity assumption is violated-likewise lead to biased Bayes factor results. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian LMMs on nonaggregated data such as on individual trials, and by explicitly modeling the full random effects structure. Reproducible code is available from https://osf.io/mjf47/. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139564771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological Methods. Pub Date: 2023-12-25 (online first). DOI: 10.1037/met0000624.
Esther Maassen, E Damiano D'Urso, Marcel A L M van Assen, Michèle B Nuijten, Kim De Roover, Jelte M Wicherts
The dire disregard of measurement invariance testing in psychological science.
Abstract: Self-report scales are widely used in psychology to compare means in latent constructs across groups, experimental conditions, or time points. However, for these comparisons to be meaningful and unbiased, the scales must demonstrate measurement invariance (MI) across compared time points or (experimental) groups. MI testing determines whether the latent constructs are measured equivalently across groups or time, which is essential for meaningful comparisons. We conducted a systematic review of 426 psychology articles with openly available data, to (a) examine common practices in conducting and reporting of MI testing, (b) assess whether we could reproduce the reported MI results, and (c) conduct MI tests for the comparisons that enabled sufficiently powerful MI testing. We identified 96 articles that contained a total of 929 comparisons. Results showed that only 4% of the 929 comparisons underwent MI testing, and the tests were generally poorly reported. None of the reported MI tests were reproducible, and only 26% of the 174 newly performed MI tests reached sufficient (scalar) invariance, with MI failing completely in 58% of tests. Exploratory analyses suggested that in nearly half of the comparisons where configural invariance was rejected, the number of factors differed between groups. These results indicate that MI tests are rarely conducted and poorly reported in psychological studies. We observed frequent violations of MI, suggesting that reported differences between (experimental) groups may not be solely attributable to group differences in the latent constructs. We offer recommendations aimed at improving reporting and computational reproducibility practices in psychology.
{"title":"Scoring assessments in multisite randomized control trials: Examining the sensitivity of treatment effect estimates to measurement choices.","authors":"Megan Kuhfeld, James Soland","doi":"10.1037/met0000633","DOIUrl":"https://doi.org/10.1037/met0000633","url":null,"abstract":"<p><p>While a great deal of thought, planning, and money goes into the design of multisite randomized control trials (RCTs) that are used to evaluate the effectiveness of interventions in fields like education and psychology, relatively little thought is often paid to the measurement choices made in such evaluations. In this study, we conduct a series of simulation studies that consider a wide range of options for producing scores from multiple administration of assessments in the context of multisite RCTs. The scoring models considered range from the simple (sum scores) to highly complex (multilevel two-tier item response theory [IRT] models with latent regression). We find that the true treatment effect is attenuated when sum scores or scores from IRT models that do not account for treatment assignment are used. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138831227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for Data Aggregation Can Lead to Biased Inferences in Bayesian Linear Mixed Models and Bayesian Analysis of Variance","authors":"","doi":"10.1037/met0000621.supp","DOIUrl":"https://doi.org/10.1037/met0000621.supp","url":null,"abstract":"","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"59 34","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138949129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}