Dynamic measurement invariance cutoffs for two-group fit index differences
Daniel McNeish
Psychological Methods (published online 2025-06-12; Impact Factor 7.8, JCR Q1, Psychology, Multidisciplinary)
DOI: 10.1037/met0000767
Citations: 0
Abstract
Measurement invariance is investigated to ensure that a measurement scale functions similarly across different groups. A prevailing approach is to fit a series of multiple-group confirmatory factor models and then compare differences in fit indices between constrained and unconstrained models. Common recommendations are that a difference in the comparative fit index (ΔCFI) above -.01, or a difference in the root-mean-square error of approximation (ΔRMSEA) less than .01, suggests evidence of invariance. In this article, we review the methodological literature highlighting that these widely used cutoffs do not generalize well. Specifically, the distributions of fit index differences expand or contract based on model and data characteristics, making any single cutoff unlikely to maintain desirable performance across a wide range of conditions. To address this, we propose a method called dynamic measurement invariance (DMI) cutoffs, an extension of dynamic fit index cutoffs originally devised to accommodate related issues in single-group models. DMI generalizes the procedure used in the seminal Cheung and Rensvold (2002) study by executing a simulation based on the researcher's specific model and data characteristics. DMI derives custom fit index difference cutoffs with optimal performance for the model being evaluated. The article explains the method and provides simulations and empirical examples to demonstrate its potential contribution, as well as ways in which it could be extended to expand its scope and utility. Open-source software is also provided to improve the accessibility of the method. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
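To make the core idea of simulation-derived cutoffs concrete, the sketch below illustrates the general logic in Python: simulate a distribution of ΔCFI values under a data-generating model consistent with invariance, then take a low quantile of that distribution as the custom cutoff. This is a minimal, hypothetical illustration only — the actual DMI method (and the article's open-source software) generates data matched to the researcher's specific model and fits constrained and unconstrained multiple-group CFAs at each replication, which requires SEM software; the Gaussian stand-in distribution here is an assumption made purely to show the quantile step.

```python
import random


def simulate_delta_cfi(n_sims, seed=0):
    """Stand-in for the DMI simulation step. In the real method, each
    replication generates data from an invariance-consistent model with the
    researcher's model and data characteristics, fits the constrained and
    unconstrained multi-group CFAs, and records the resulting ΔCFI. Here a
    hypothetical Gaussian draw (mean -0.002, sd 0.004) merely illustrates
    a ΔCFI distribution that clusters slightly below zero under invariance."""
    rng = random.Random(seed)
    return [rng.gauss(-0.002, 0.004) for _ in range(n_sims)]


def dynamic_cutoff(deltas, alpha=0.05):
    """Take the alpha-quantile of the simulated invariance distribution as
    the custom cutoff: an observed ΔCFI below this value would occur rarely
    if invariance held, so it is flagged as evidence of noninvariance."""
    ordered = sorted(deltas)
    k = int(alpha * len(ordered))
    return ordered[k]


deltas = simulate_delta_cfi(10_000)
cutoff = dynamic_cutoff(deltas)
print(f"custom ΔCFI cutoff (illustrative): {cutoff:.4f}")
```

Because the cutoff is read off a distribution simulated for the model at hand, it adapts to the model and data characteristics instead of relying on a single fixed value such as -.01 — which is the central point of the DMI proposal.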
About the journal:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ those procedures in research.