James Soland, Veronica Cole, Stephen Tavares, Qilin Zhang
{"title":"生长混合模型结果对评分决策高度敏感的证据。","authors":"James Soland, Veronica Cole, Stephen Tavares, Qilin Zhang","doi":"10.1080/00273171.2024.2444955","DOIUrl":null,"url":null,"abstract":"<p><p>Interest in identifying latent growth profiles to support the psychological and social-emotional development of individuals has translated into the widespread use of growth mixture models (GMMs). In most cases, GMMs are based on scores from item responses collected using survey scales or other measures. Research already shows that GMMs can be sensitive to departures from ideal modeling conditions and that growth model results outside of GMMs are sensitive to decisions about how item responses are scored, but the impact of scoring decisions on GMMs has never been investigated. We start to close that gap in the literature with the current study. Through empirical and Monte Carlo studies, we show that GMM results-including convergence, class enumeration, and latent growth trajectories within class-are extremely sensitive to seemingly arcane measurement decisions. Further, our results make clear that, because GMM latent classes are not known a priori, measurement models used to produce scores for use in GMMs are, almost by definition, misspecified because they cannot account for group membership. Misspecification of the measurement model then, in turn, biases GMM results. Practical implications of these results are discussed. Our findings raise serious concerns that many results in the current GMM literature may be driven, in part or whole, by measurement artifacts rather than substantive differences in developmental trends.</p>","PeriodicalId":53155,"journal":{"name":"Multivariate Behavioral Research","volume":" ","pages":"1-22"},"PeriodicalIF":5.3000,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evidence That Growth Mixture Model Results Are Highly Sensitive to Scoring Decisions.\",\"authors\":\"James Soland, Veronica Cole, Stephen Tavares, Qilin Zhang\",\"doi\":\"10.1080/00273171.2024.2444955\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Interest in identifying latent growth profiles to support the psychological and social-emotional development of individuals has translated into the widespread use of growth mixture models (GMMs). In most cases, GMMs are based on scores from item responses collected using survey scales or other measures. Research already shows that GMMs can be sensitive to departures from ideal modeling conditions and that growth model results outside of GMMs are sensitive to decisions about how item responses are scored, but the impact of scoring decisions on GMMs has never been investigated. We start to close that gap in the literature with the current study. Through empirical and Monte Carlo studies, we show that GMM results-including convergence, class enumeration, and latent growth trajectories within class-are extremely sensitive to seemingly arcane measurement decisions. Further, our results make clear that, because GMM latent classes are not known a priori, measurement models used to produce scores for use in GMMs are, almost by definition, misspecified because they cannot account for group membership. Misspecification of the measurement model then, in turn, biases GMM results. Practical implications of these results are discussed. 
Our findings raise serious concerns that many results in the current GMM literature may be driven, in part or whole, by measurement artifacts rather than substantive differences in developmental trends.</p>\",\"PeriodicalId\":53155,\"journal\":{\"name\":\"Multivariate Behavioral Research\",\"volume\":\" \",\"pages\":\"1-22\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2025-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multivariate Behavioral Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1080/00273171.2024.2444955\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multivariate Behavioral Research","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1080/00273171.2024.2444955","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Evidence That Growth Mixture Model Results Are Highly Sensitive to Scoring Decisions.
Interest in identifying latent growth profiles to support the psychological and social-emotional development of individuals has translated into the widespread use of growth mixture models (GMMs). In most cases, GMMs are based on scores from item responses collected using survey scales or other measures. Research already shows that GMMs can be sensitive to departures from ideal modeling conditions, and that growth model results outside of GMMs are sensitive to decisions about how item responses are scored, but the impact of scoring decisions on GMMs has never been investigated. We start to close that gap in the literature with the current study. Through empirical and Monte Carlo studies, we show that GMM results, including convergence, class enumeration, and latent growth trajectories within class, are extremely sensitive to seemingly arcane measurement decisions. Further, our results make clear that, because GMM latent classes are not known a priori, measurement models used to produce scores for use in GMMs are, almost by definition, misspecified because they cannot account for group membership. Misspecification of the measurement model then, in turn, biases GMM results. Practical implications of these results are discussed. Our findings raise serious concerns that many results in the current GMM literature may be driven, in part or whole, by measurement artifacts rather than substantive differences in developmental trends.
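To make concrete what "latent growth trajectories within class" refers to, a minimal sketch of a standard linear growth mixture model is given below. The notation is a generic textbook formulation, not taken from the article itself. For individual i in latent class k, with repeated measure y_{it} at time t:

\[
y_{it} = \eta_{0i} + \eta_{1i}\,\lambda_t + \varepsilon_{it},
\qquad
\eta_{0i} = \alpha_{0k} + \zeta_{0i},
\qquad
\eta_{1i} = \alpha_{1k} + \zeta_{1i},
\]

where \(\lambda_t\) are fixed time scores, \(\alpha_{0k}\) and \(\alpha_{1k}\) are the class-specific mean intercept and slope, \(\zeta_{0i}\) and \(\zeta_{1i}\) are individual deviations from the class means, and class membership follows \(P(C_i = k) = \pi_k\) with \(\sum_k \pi_k = 1\). In practice the \(y_{it}\) are scores produced beforehand by a separate measurement model applied to the item responses (for example, a sum score or an IRT/CFA-based score); that scoring step is the decision to which the study shows GMM convergence, class enumeration, and within-class trajectories are sensitive.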
Journal overview:
Multivariate Behavioral Research (MBR) publishes a variety of substantive, methodological, and theoretical articles in all areas of the social and behavioral sciences. Most MBR articles fall into one of two categories. Substantive articles report on applications of sophisticated multivariate research methods to study topics of substantive interest in personality, health, intelligence, industrial/organizational, and other behavioral science areas. Methodological articles present and/or evaluate new developments in multivariate methods, or address methodological issues in current research. We also encourage submission of integrative articles related to pedagogy involving multivariate research methods, and to historical treatments of interest and relevance to multivariate research methods.