{"title":"Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.","authors":"Katie Steele, Charlotte Werndl","doi":"10.1093/bjps/axw024","DOIUrl":null,"url":null,"abstract":"<p><p>This article argues that common intuitions regarding (a) the specialness of 'use-novel' data for confirmation and (b) that this specialness implies the 'no-double-counting rule', which says that data used in 'constructing' (calibrating) a model cannot also play a role in confirming the model's predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised. <b>1</b> <i>Introduction</i><b>2</b> <i>A Climate Case Study</i><b>3</b> <i>The Bayesian Method vis-à-vis Intuitions</i><b>4</b> <i>Classical Tests vis-à-vis Intuitions</i><b>5</b> <i>Classical Model-Selection Methods vis-à-vis Intuitions</i> <b>5.1</b> <i>Introducing classical model-selection methods</i> <b>5.2</b> <i>Two cases</i><b>6</b> <i>Re-examining Our Case Study</i><b>7</b> <i>Conclusion</i>.</p>","PeriodicalId":509962,"journal":{"name":"The British Journal for the Philosophy of Science","volume":"69 2","pages":"351-375"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1093/bjps/axw024","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The British Journal for the Philosophy of Science","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1093/bjps/axw024","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2016/8/30 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 13
Abstract
This article argues that two common intuitions are too crude: (a) that 'use-novel' data are special for confirmation, and (b) that this specialness implies the 'no-double-counting rule', according to which data used in 'constructing' (calibrating) a model cannot also play a role in confirming the model's predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, and only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.

Contents
1 Introduction
2 A Climate Case Study
3 The Bayesian Method vis-à-vis Intuitions
4 Classical Tests vis-à-vis Intuitions
5 Classical Model-Selection Methods vis-à-vis Intuitions
  5.1 Introducing classical model-selection methods
  5.2 Two cases
6 Re-examining Our Case Study
7 Conclusion
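For readers unfamiliar with the classical model-selection methods discussed in Section 5, the following Python snippet is a minimal, purely illustrative sketch (not taken from the paper) of one standard criterion, the Akaike Information Criterion (AIC = 2k - 2 ln L̂). It shows how such a criterion lets the same data both calibrate a model and bear on its appraisal: the likelihood term rewards fit to the calibration data, while the penalty term charges for every calibrated parameter. The polynomial setup and all names are hypothetical choices for illustration.

# Illustrative sketch (assumptions noted above): the same data set is used
# to calibrate each candidate model and to compute its AIC score.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)  # truth: a straight line plus noise

def aic_for_polynomial(degree):
    """Calibrate a polynomial of the given degree by least squares, then score it.

    AIC = 2k - 2 ln L̂, where k counts calibrated parameters (coefficients
    plus the noise variance) and L̂ is the maximized Gaussian likelihood.
    """
    coeffs = np.polyfit(x, y, degree)          # calibration step
    residuals = y - np.polyval(coeffs, x)
    n = x.size
    sigma2 = np.mean(residuals ** 2)           # ML estimate of the noise variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                             # coefficients + variance parameter
    return 2 * k - 2 * log_lik

for d in (1, 5, 10):
    print(f"degree {d:2d}: AIC = {aic_for_polynomial(d):7.2f}")

The over-flexible models fit the calibration data more closely but pay a larger penalty per fitted parameter, so the linear model typically wins. This is the sense in which calibration data are not confirmationally inert on the model-selection picture: they enter the appraisal, but only through a score that discounts accommodation.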