{"title":"Testing for Equivalence: A Methodology for Computational Cognitive Modelling","authors":"T. Stewart, R. West","doi":"10.2478/v10229-011-0010-8","DOIUrl":null,"url":null,"abstract":"Testing for Equivalence: A Methodology for Computational Cognitive Modelling The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.","PeriodicalId":247142,"journal":{"name":"Journal of Artificial General Intelligence","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Artificial General Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2478/v10229-011-0010-8","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
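The abstract does not spell out how the equivalence test is computed. As a rough illustration of the general idea only (checking whether a model's output is statistically indistinguishable from empirical data within a tolerance, rather than testing for a significant difference), the sketch below uses the standard two one-sided tests (TOST) procedure. The function name tost_equivalence, the margin delta, the alpha level, and the sample data are illustrative assumptions and not the specific test defined by Stewart and West (2007).

```python
# Minimal sketch of a two one-sided tests (TOST) equivalence check.
# NOTE: generic equivalence procedure, not necessarily the exact test
# of Stewart and West (2007); delta (equivalence margin) is hypothetical.
import numpy as np
from scipy import stats

def tost_equivalence(model_values, data_values, delta, alpha=0.05):
    """Return True if the mean difference between model output and
    empirical data lies statistically within +/- delta."""
    model_values = np.asarray(model_values, dtype=float)
    data_values = np.asarray(data_values, dtype=float)
    diff = model_values.mean() - data_values.mean()
    # Unpooled (Welch) standard error of the difference of means.
    se = np.sqrt(model_values.var(ddof=1) / len(model_values)
                 + data_values.var(ddof=1) / len(data_values))
    # Degrees of freedom approximated as n1 + n2 - 2 for simplicity.
    df = len(model_values) + len(data_values) - 2
    # Two one-sided tests: H0a: diff <= -delta and H0b: diff >= +delta.
    t_lower = (diff + delta) / se
    t_upper = (diff - delta) / se
    p_lower = 1 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    # Equivalence is concluded only if both one-sided nulls are rejected.
    return max(p_lower, p_upper) < alpha

# Example: hypothetical model runs vs. observed choice proportions.
rng = np.random.default_rng(0)
model_runs = rng.normal(0.62, 0.05, size=40)
human_data = rng.normal(0.60, 0.05, size=40)
print(tost_equivalence(model_runs, human_data, delta=0.05))
```

Note the reversal of roles relative to a conventional significance test: the null hypothesis is that model and data differ by at least the margin, so failing to match the data cannot be mistaken for success, which is in line with the abstract's emphasis on numerical similarity across a broad set of measures.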