Rethinking measurement invariance causally

Julia M. Rohrer, Borysław Paulewicz

Current Research in Ecological and Social Psychology, Volume 9 (2025), Article 100241
DOI: 10.1016/j.cresp.2025.100241
URL: https://www.sciencedirect.com/science/article/pii/S2666622725000280
Publication date: 2025-01-01 · Journal Article · Impact Factor: 2.2

Abstract

Measurement invariance is often touted as a necessary statistical prerequisite for group comparisons. Typically, when there is evidence against measurement invariance, the analysis ends. Here, we introduce readers to an alternative perspective on measurement invariance that shifts the focus from statistical procedures to causality. From that angle, violations of measurement invariance imply that there are potentially interesting differences in the measurement process between the groups, which could warrant explanations in their own right. We illustrate this with hypothetical examples of substantively meaningful violations of metric, scalar, and residual invariance. At the same time, standard procedures to test for measurement invariance rest on strong causal assumptions about the data-generating process that researchers may often be unwilling to endorse in other contexts. We point out two very different ways forward. First, for researchers who want to commit to latent factor models, violations of measurement invariance can be followed up with investigations into why those violations occur, turning them from a dead end into new research questions. Second, for researchers who feel more ambivalent about latent factor models, alternatives may be considered, and group differences on sum scores and item scores may be reported anyway as interesting descriptive findings, but these should be followed up with discussions of various explanations that take their plausibility into account.