{"title":"Bayesian Learning","authors":"D. Simovici","doi":"10.1017/9781139879354.002","DOIUrl":"https://doi.org/10.1017/9781139879354.002","url":null,"abstract":"10 Theoretical Background 11 Bayesian methods have undergone tremendous progress 12 in recent years, due largely to mathematical advances in 13 probability and estimation theory (Chater et al. 2006). 14 These advances have allowed theorists to express and 15 derive predictions from far more sophisticated models 16 than previously possible. These models have generated 17 a good deal of excitement for at least two reasons. First, 18 they offer a new interpretation of the goals of cognitive 19 systems, in terms of inductive probabilistic inference, 20 which has revived attempts at rational explanation of 21 human behavior (Oaksford and Chater 2007). Second, 22 Bayesian models may have the potential to explain some 23 of the most complex aspects of human cognition, such as 24 language acquisition or reasoning under uncertainty, 25 where structured information and incomplete knowledge 26 combine in a way that has defied previous approaches 27 (e.g., Kemp and Tenenbaum 2008). 28 Constructing a Bayesianmodel involves two steps. The 29 first step is to specify the set of possibilities for the state of 30 the world, which is referred to as the hypothesis space. 31 Each hypothesis can be thought of as a prediction by the 32 subject about what future sensory information will be 33 encountered. However, the term hypothesis should not 34 be confused with its more traditional usage in psychology, 35 connoting explicit testing of rules or other symbolically 36 represented propositions. In the context of Bayesian 37 modeling, hypotheses need have nothing to do with 38 explicit reasoning, and indeed the Bayesian framework 39 makes no commitment whatsoever on this issue. 40 For example, in Bayesian models of visual processing, 41 hypotheses can correspond to extremely low-level infor42 mation, such as the presence of elementary visual features 43 (contours, etc.) at various locations in the visual field 44 (Geisler et al. 2001). There is also no commitment regard45 ing where the hypotheses come from. Hypotheses could 46 represent innate biases or knowledge, or they could have 47 been learned previously by the individual. Thus, the 48 framework has no position on nativist–empiricist debates. 49 Furthermore, hypotheses representing very different types 50 of information (e.g., a contour in a particular location, 51 whether or not the image reminds you of your mother, 52 whether the image is symmetrical, whether it spells 53 a particular word, etc.) are all lumped together in 54 a common hypothesis space and treated equally by the 55 model. Thus, there is no distinction between different 56 types of representations or knowledge systems within the 57 brain. In general, a hypothesis is nothing more than 58 a probability distribution. This distribution, referred to 59 as the likelihood function, simply specifies how likely each 60 possible pattern of observations is according to the 61 hypothesis in question. 
62 The second step in constructing a Bayesian model is to 63 ","PeriodicalId":262698,"journal":{"name":"Variational Bayesian Learning Theory","volume":"20 Suppl 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132112767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
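The abstract above describes the first step of constructing a Bayesian model (specifying a hypothesis space, where each hypothesis is nothing more than a likelihood function over observations) and is cut off before the second step. Below is a minimal Python sketch of that setup, assuming the standard formulation in which the second step supplies a prior over the hypotheses that is combined with the likelihoods via Bayes' rule; the coin-bias hypothesis space, the function names, and all numbers are illustrative assumptions, not taken from the chapter.

```python
# Toy illustration of the two-step Bayesian model sketched in the abstract.
# All names and numbers are illustrative assumptions, not from the text.

# Step 1: the hypothesis space. Each hypothesis is "nothing more than a
# probability distribution" over observations, i.e. a likelihood function.
# Here a hypothesis is a coin bias theta; an observation is 1 (heads) or 0.
def likelihood(theta, observation):
    """P(observation | hypothesis theta)."""
    return theta if observation == 1 else 1.0 - theta

hypotheses = [0.2, 0.5, 0.8]                            # candidate biases
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior (assumed step 2)

# Bayes' rule over the finite hypothesis space:
# P(h | data) = P(h) * prod_x P(x | h) / P(data).
def posterior(prior, data):
    unnorm = {}
    for h, p in prior.items():
        like = 1.0
        for x in data:
            like *= likelihood(h, x)
        unnorm[h] = p * like
    z = sum(unnorm.values())        # normalizing constant P(data)
    return {h: w / z for h, w in unnorm.items()}

if __name__ == "__main__":
    data = [1, 1, 0, 1]             # three heads, one tail
    for h, p in posterior(prior, data).items():
        print(f"P(theta={h} | data) = {p:.3f}")
```

With three heads in four observations, the posterior puts most of its mass (about 0.60) on the 0.8-bias hypothesis, showing how the likelihood functions alone, pushed through Bayes' rule, rank the competing hypotheses.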
{"title":"VB Algorithm under No Conjugacy","authors":"","doi":"10.1017/9781139879354.006","DOIUrl":"https://doi.org/10.1017/9781139879354.006","url":null,"abstract":"","PeriodicalId":262698,"journal":{"name":"Variational Bayesian Learning Theory","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131251304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Asymptotic VB Theory of Other Latent Variable Models","authors":"","doi":"10.1017/9781139879354.017","DOIUrl":"https://doi.org/10.1017/9781139879354.017","url":null,"abstract":"","PeriodicalId":262698,"journal":{"name":"Variational Bayesian Learning Theory","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127755392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}