Evaluating the performance of existing and novel equivalence tests for fit indices in structural equation modelling

Authors: Nataly Beribisky, Robert A. Cribbie
Journal: British Journal of Mathematical & Statistical Psychology
DOI: 10.1111/bmsp.12317
Publication date: 2023-07-13
URL: https://onlinelibrary.wiley.com/doi/10.1111/bmsp.12317
Abstract
It has been suggested that equivalence testing (otherwise known as negligible effect testing) should be used to evaluate model fit within structural equation modelling (SEM). In this study, we propose novel variations of equivalence tests based on the popular root mean squared error of approximation and comparative fit index fit indices. Using Monte Carlo simulations, we compare the performance of these novel tests to other existing equivalence testing-based fit indices in SEM, as well as to other methods commonly used to evaluate model fit. Results indicate that equivalence tests in SEM have good Type I error control and display considerable power for detecting well-fitting models in medium to large sample sizes. At small sample sizes, relative to traditional fit indices, equivalence tests limit the chance of supporting a poorly fitting model. We also present an illustrative example to demonstrate how equivalence tests may be incorporated in model fit reporting. Equivalence tests in SEM also have unique interpretational advantages compared to other methods of model fit evaluation. We recommend that equivalence tests be utilized in conjunction with descriptive fit indices to provide more evidence when evaluating model fit.
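The RMSEA-based equivalence logic the abstract refers to can be sketched roughly as follows. This is a minimal illustration in the spirit of chi-square-based tests of close fit (e.g., MacCallum-style noncentrality reasoning), not the authors' exact procedure; the function name, default bound, and alpha level are our own choices for the sketch:

```python
from scipy.stats import ncx2

def rmsea_equivalence_test(chisq, df, n, epsilon0=0.05, alpha=0.05):
    """Equivalence (close-fit) test sketch for RMSEA.

    H0: RMSEA >= epsilon0 (fit is NOT acceptably close).
    Reject H0 when the observed chi-square statistic falls below the
    alpha quantile of the noncentral chi-square distribution implied
    by the equivalence bound epsilon0.
    """
    ncp = (n - 1) * df * epsilon0 ** 2      # noncentrality under RMSEA = epsilon0
    critical = ncx2.ppf(alpha, df, ncp)     # lower alpha quantile under H0 boundary
    return chisq < critical                 # True -> conclude acceptably close fit
```

Returning `True` supports the conclusion that population misfit is below the chosen bound; unlike a descriptive RMSEA cutoff, the decision carries an explicit Type I error rate, which is the interpretational advantage the abstract highlights.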
Journal overview:
The British Journal of Mathematical and Statistical Psychology publishes articles in areas of psychology whose arguments carry a stronger mathematical or statistical component than most other journals will accept, including:
• mathematical psychology
• statistics
• psychometrics
• decision making
• psychophysics
• classification
• relevant areas of mathematics, computing and computer software
These include articles that address substantive psychological issues or that develop and extend techniques useful to psychologists. New models for psychological processes, new approaches to existing data, critiques of existing models, and improved algorithms for estimating the parameters of a model are examples of articles that may be favoured.