Ethan M McCormick, Patrick J Curran, Gregory R Hancock
{"title":"Latent growth factors as predictors of distal outcomes.","authors":"Ethan M McCormick, Patrick J Curran, Gregory R Hancock","doi":"10.1037/met0000642","DOIUrl":"https://doi.org/10.1037/met0000642","url":null,"abstract":"<p><p>A currently overlooked application of the latent curve model (LCM) is its use in assessing the consequences of development patterns of change-that is as a predictor of distal outcomes. However, there are additional complications for appropriately specifying and interpreting the distal outcome LCM. Here, we develop a general framework for understanding the sensitivity of the distal outcome LCM to the choice of time coding, focusing on the regressions of the distal outcome on the latent growth factors. Using artificial and real-data examples, we highlight the unexpected changes in the regression of the slope factor which stand in contrast to prior work on time coding effects, and develop a framework for estimating the distal outcome LCM at a point in the trajectory-known as the aperture-which maximizes the interpretability of the effects. We also outline a prioritization approach developed for assessing incremental validity to obtain consistently interpretable estimates of the effect of the slope. Throughout, we emphasize practical steps for understanding these changing predictive effects, including graphical approaches for assessing regions of significance similar to those used to probe interaction effects. We conclude by providing recommendations for applied research using these models and outline an agenda for future work in this area. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141200563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological methods. Pub Date: 2024-06-01. Epub Date: 2024-03-21. DOI: 10.1037/met0000507
Harlan Campbell, Paul Gustafson
{"title":"The Bayes factor, HDI-ROPE, and frequentist equivalence tests can all be reverse engineered-Almost exactly-From one another: Reply to Linde et al. (2021).","authors":"Harlan Campbell, Paul Gustafson","doi":"10.1037/met0000507","DOIUrl":"10.1037/met0000507","url":null,"abstract":"<p><p>Following an extensive simulation study comparing the operating characteristics of three different procedures used for establishing equivalence (the frequentist \"TOST,\" the Bayesian \"HDI-ROPE,\" and the Bayes factor interval null procedure), Linde et al. (2021) conclude with the recommendation that \"researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence\" (p. 1). We redo the simulation study of Linde et al. (2021) in its entirety but with the different procedures calibrated to have the same predetermined maximum Type I error rate. Our results suggest that, when calibrated in this way, the Bayes factor, HDI-ROPE, and frequentist equivalence tests all have similar-almost exactly-Type II error rates. In general any advocating for frequentist testing as better or worse than Bayesian testing in terms of empirical findings seems dubious at best. If one decides on which underlying principle to subscribe to in tackling a given problem, then the method follows naturally. Bearing in mind that each procedure can be reverse-engineered from the others (at least approximately), trying to use empirical performance to argue for 1 approach over another seems like tilting at windmills. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"613-623"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140176154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological methods. Pub Date: 2024-06-01. Epub Date: 2023-02-16. DOI: 10.1037/met0000540
Friedrich M Götz, Rakoen Maertens, Sahil Loomba, Sander van der Linden
{"title":"Let the algorithm speak: How to use neural networks for automatic item generation in psychological scale development.","authors":"Friedrich M Götz, Rakoen Maertens, Sahil Loomba, Sander van der Linden","doi":"10.1037/met0000540","DOIUrl":"10.1037/met0000540","url":null,"abstract":"<p><p>Measurement is at the heart of scientific research. As many-perhaps most-psychological constructs cannot be directly observed, there is a steady demand for reliable self-report scales to assess latent constructs. However, scale development is a tedious process that requires researchers to produce good items in large quantities. In this tutorial, we introduce, explain, and apply the Psychometric Item Generator (PIG), an open-source, free-to-use, self-sufficient natural language processing algorithm that produces large-scale, human-like, customized text output within a few mouse clicks. The PIG is based on the GPT-2, a powerful generative language model, and runs on Google Colaboratory-an interactive virtual notebook environment that executes code on state-of-the-art virtual machines at no cost. Across two demonstrations and a preregistered five-pronged empirical validation with two Canadian samples (<i>N</i><sub>Sample 1</sub> = 501, <i>N</i><sub>Sample 2</sub> = 773), we show that the PIG is equally well-suited to generate large pools of face-valid items for novel constructs (i.e., wanderlust) and create parsimonious short scales of existing constructs (i.e., Big Five personality traits) that yield strong performances when tested in the wild and benchmarked against current gold standards for assessment. The PIG does not require any prior coding skills or access to computational resources and can easily be tailored to any desired context by simply switching out short linguistic prompts in a single line of code. In short, we present an effective, novel machine learning solution to an old psychological challenge. 
As such, the PIG will not require you to learn a new language, but instead speak yours. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"494-518"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10787831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological methods. Pub Date: 2024-06-01. Epub Date: 2023-04-13. DOI: 10.1037/met0000565
Xiao Liu, Fang Liu, Laura Miller-Graff, Kathryn H Howell, Lijuan Wang
{"title":"Causal inference for treatment effects in partially nested designs.","authors":"Xiao Liu, Fang Liu, Laura Miller-Graff, Kathryn H Howell, Lijuan Wang","doi":"10.1037/met0000565","DOIUrl":"10.1037/met0000565","url":null,"abstract":"<p><p>artially nested designs (PNDs) are common in intervention studies in psychology and other social sciences. With this design, participants are assigned to treatment and control groups on an individual basis, but clustering occurs in some but not all groups (e.g., the treatment group). In recent years, there has been substantial development of methods for analyzing data from PNDs. However, little research has been done on causal inference for PNDs, especially for PNDs with nonrandomized treatment assignments. To reduce the research gap, in the current study, we used the expanded potential outcomes framework to define and identify the average causal treatment effects in PNDs. Based on the identification results, we formulated the outcome models that could produce treatment effect estimates with causal interpretation and evaluated how alternative model specifications affect the causal interpretation. We also developed an inverse propensity weighted (IPW) estimation approach and proposed a sandwich-type standard error estimator for the IPW-based estimate. Our simulation studies demonstrated that both the outcome modeling and the IPW methods specified following the identification results can yield satisfactory estimates and inferences of the average causal treatment effects. We applied the proposed approaches to data from a real-life pilot study of the Pregnant Moms' Empowerment Program for illustration. The current study provides guidance and insights on causal inference for PNDs and adds to researchers' toolbox of treatment effect estimation with PNDs. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"457-479"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9737459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Causal definitions versus casual estimation: Reply to Valente et al. (2022).","authors":"Holger Brandt","doi":"10.1037/met0000544","DOIUrl":"10.1037/met0000544","url":null,"abstract":"<p><p>In this response to Valente et al. (2022), I am discussing the plausibility and applicability of the proposed mediation model and its causal effects estimation for single case experimental designs (SCEDs). I will focus on the underlying assumptions that the authors use to identify the causal effects. These assumptions include the particularly problematic assumption of sequential ignorability or no-unmeasured confounders. First, I will discuss the plausibility of the assumption in general and then particularly for SCEDs by providing an analytic argument and a reanalysis of the empirical example in Valente et al. (2022). Second, I will provide a simulation that reproduces the design by Valente et al. (2022) with the exception that, for a more realistic depiction of empirical data, an unmeasured confounder affects the mediator and outcome variables. The results of this simulation study indicate that even minor violations will lead to Type I error rates up to 100% and coverage rates as low as 0% for the defined causal direct and indirect effects. Third, using historical data on the effect of birth control on stork population and birth rates, I will show that mediation models like the proposed method can lead to surprising artifacts. These artifacts can hardly be identified with statistically means including methods such as sensitivity analyses. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"29 3","pages":"589-602"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological methods. Pub Date: 2024-06-01. Epub Date: 2023-01-09. DOI: 10.1037/met0000537
Victoria Savalei, Jordan C Brace, Rachel T Fouladi
{"title":"We need to change how we compute RMSEA for nested model comparisons in structural equation modeling.","authors":"Victoria Savalei, Jordan C Brace, Rachel T Fouladi","doi":"10.1037/met0000537","DOIUrl":"10.1037/met0000537","url":null,"abstract":"<p><p>Comparison of nested models is common in applications of structural equation modeling (SEM). When two models are nested, model comparison can be done via a chi-square difference test or by comparing indices of approximate fit. The advantage of fit indices is that they permit some amount of misspecification in the additional constraints imposed on the model, which is a more realistic scenario. The most popular index of approximate fit is the root mean square error of approximation (RMSEA). In this article, we argue that the dominant way of comparing RMSEA values for two nested models, which is simply taking their difference, is problematic and will often mask misfit, particularly in model comparisons with large initial degrees of freedom. We instead advocate computing the RMSEA associated with the chi-square difference test, which we call RMSEA<sub>D</sub>. We are not the first to propose this index, and we review numerous methodological articles that have suggested it. Nonetheless, these articles appear to have had little impact on actual practice. The modification of current practice that we call for may be particularly needed in the context of measurement invariance assessment. We illustrate the difference between the current approach and our advocated approach on three examples, where two involve multiple-group and longitudinal measurement invariance assessment and the third involves comparisons of models with different numbers of factors. We conclude with a discussion of recommendations and future research directions. 
(PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"480-493"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10495377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Maximilian Linde, Jorge N Tendeiro, Eric-Jan Wagenmakers, Don van Ravenzwaaij
{"title":"Practical implications of equating equivalence tests: Reply to Campbell and Gustafson (2022).","authors":"Maximilian Linde, Jorge N Tendeiro, Eric-Jan Wagenmakers, Don van Ravenzwaaij","doi":"10.1037/met0000549","DOIUrl":"10.1037/met0000549","url":null,"abstract":"<p><p>Linde et al. (2021) compared the \"two one-sided tests\" the \"highest density interval-region of practical equivalence\", and the \"interval Bayes factor\" approaches to establishing equivalence in terms of power and Type I error rate using typical decision thresholds. They found that the interval Bayes factor approach exhibited a higher power but also a higher Type I error rate than the other approaches. In response, Campbell and Gustafson (2022) showed that the performances of the three approaches can approximate one another when they are calibrated to have the same Type I error rate. In this article, we argue that these results have little bearing on how these approaches are used in practice; a concrete example is used to highlight this important point. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"29 3","pages":"603-605"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An overview of alternative formats to the Likert format: A comment on Wilson et al. (2022).","authors":"Xijuan Zhang, Victoria Savalei","doi":"10.1037/met0000631","DOIUrl":"10.1037/met0000631","url":null,"abstract":"<p><p>Wilson et al. (2022) compared the Likert response format to an alternative format, which they called the Guttman response format. Using a Rasch modeling approach, they found that the Guttman response format had better properties relative to the Likert response format. We agree with their analyses and conclusions. However, they have failed to mention many existing articles that have sought to overcome the disadvantages of the Likert format through the use of an alternative format. For example, the so-called \"Guttman response format\" is essentially the same as the Expanded format, which was proposed by Zhang and Savalei (2016) as a way to overcome the disadvantages of the Likert format. Similar alternative formats have been investigated since the 1960s. In this short response article, we provide a review of several alternative formats, explaining in detail the key characteristics of all the alternative formats that are designed to overcome the problems with the Likert format. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"29 3","pages":"606-612"},"PeriodicalIF":7.6,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142293954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}