{"title":"学习条件证据","authors":"J. Sprenger, S. Hartmann","doi":"10.1093/oso/9780199672110.003.0004","DOIUrl":null,"url":null,"abstract":"Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: First, by generalizing Bayesian Conditionalization to minimizing an appropriate divergence between prior and posterior probability distribution. Second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both prior and posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.","PeriodicalId":140328,"journal":{"name":"Bayesian Philosophy of Science","volume":"81 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning Conditional Evidence\",\"authors\":\"J. Sprenger, S. Hartmann\",\"doi\":\"10.1093/oso/9780199672110.003.0004\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: First, by generalizing Bayesian Conditionalization to minimizing an appropriate divergence between prior and posterior probability distribution. Second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both prior and posterior. 
We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.\",\"PeriodicalId\":140328,\"journal\":{\"name\":\"Bayesian Philosophy of Science\",\"volume\":\"81 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bayesian Philosophy of Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/oso/9780199672110.003.0004\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bayesian Philosophy of Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/oso/9780199672110.003.0004","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning indicative conditionals and learning relative frequencies have one thing in common: they are examples of conditional evidence, that is, evidence that includes a suppositional element. Standard Bayesian theory does not describe how such evidence affects rational degrees of belief, and natural solutions run into major problems. We propose that conditional evidence is best modeled by a combination of two strategies: first, by generalizing Bayesian Conditionalization to the minimization of an appropriate divergence between the prior and the posterior probability distributions; second, by representing the relevant causal relations and the implied conditional independence relations in a Bayesian network that constrains both the prior and the posterior. We show that this approach solves several well-known puzzles about learning conditional evidence (e.g., the notorious Judy Benjamin problem) and that learning an indicative conditional can often be described adequately by conditionalizing on the associated material conditional.
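
To illustrate the divergence-minimization strategy mentioned in the abstract, the following Python sketch applies it to the standard Judy Benjamin setup: a uniform prior over four cells (Red-HQ, Red-2nd, Blue-HQ, Blue-2nd) and the conditional evidence P(HQ | Red) = 3/4. This is not the authors' own code, only a minimal numerical reconstruction under those stock assumptions; it minimizes the Kullback-Leibler divergence between posterior and prior subject to the learned constraint, using scipy's SLSQP solver.

# A minimal sketch of divergence minimization for the Judy Benjamin problem.
# Cells: [Red-HQ, Red-2nd, Blue-HQ, Blue-2nd]; the learned conditional evidence
# Q(HQ | Red) = 3/4 is encoded as the linear constraint q[0] = 3 * q[1].
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.25, 0.25, 0.25, 0.25])  # uniform prior over the four cells

def kl(q, p=prior):
    # Kullback-Leibler divergence D(q || p); clip to avoid log(0) at the bounds.
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / p)))

constraints = [
    {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},    # posterior must normalize
    {"type": "eq", "fun": lambda q: q[0] - 3.0 * q[1]},  # Q(HQ | Red) = 3/4
]

res = minimize(kl, x0=prior, bounds=[(0.0, 1.0)] * 4,
               constraints=constraints, method="SLSQP")

posterior = res.x
print("posterior:", np.round(posterior, 2))                   # ≈ [0.35, 0.12, 0.27, 0.27]
print("P(Red)   :", round(posterior[0] + posterior[1], 3))    # ≈ 0.467

On this naive application, the posterior probability of Red territory drops to roughly 0.47: learning the conditional lowers the probability of its antecedent. That counterintuitive shift is the familiar form of the Judy Benjamin puzzle, which the chapter addresses by combining divergence minimization with Bayesian-network constraints encoding the relevant causal and conditional-independence structure.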