{"title":"Representation learning with reward prediction errors","authors":"S. Gershman","doi":"10.51628/001c.37270","DOIUrl":"https://doi.org/10.51628/001c.37270","url":null,"abstract":"The Reward Prediction Error hypothesis proposes that phasic activity in the midbrain dopaminergic system reflects prediction errors needed for learning in reinforcement learning. Besides the well-documented association between dopamine and reward processing, dopamine is implicated in a variety of functions without a clear relationship to reward prediction error. Fluctuations in dopamine levels influence the subjective perception of time, dopamine bursts precede the generation of motor responses, and the dopaminergic system innervates regions of the brain, including hippocampus and areas in prefrontal cortex, whose function is not uniquely tied to reward. In this manuscript, we propose that a common theme linking these functions is representation, and that prediction errors signaled by the dopamine system, in addition to driving associative learning, can also support the acquisition of adaptive state representations. In a series of simulations, we show how this extension can account for the role of dopamine in temporal and spatial representation, motor response, and abstract categorization tasks. By extending the role of dopamine signals to learning state representations, we resolve a critical challenge to the Reward Prediction Error hypothesis of dopamine function.","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82789402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of normative and approximate evidence accumulation on the dynamic clicks task.","authors":"Adrian E Radillo, Alan Veliz-Cuba, Krešimir Josić, Zachary P Kilpatrick","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The aim of a number of psychophysics tasks is to uncover how mammals make decisions in a world that is in flux. Here we examine the characteristics of ideal and near-ideal observers in a task of this type. We ask when and how performance depends on task parameters and design, and, in turn, what observer performance tells us about their decision-making process. In the dynamic clicks task subjects hear two streams (left and right) of Poisson clicks with different rates. Subjects are rewarded when they correctly identify the side with the higher rate, as this side switches unpredictably. We show that a reduced set of task parameters defines regions in parameter space in which optimal, but not near-optimal observers, maintain constant response accuracy. We also show that for a range of task parameters an approximate normative model must be finely tuned to reach near-optimal performance, illustrating a potential way to distinguish between normative models and their approximations. In addition, we show that using the negative log-likelihood and the 0/1-loss functions to fit these types of models is not equivalent: the 0/1-loss leads to a bias in parameter recovery that increases with sensory noise. These findings suggest ways to tease apart models that are hard to distinguish when tuned exactly, and point to general pitfalls in experimental design, model fitting, and interpretation of the resulting data.</p>","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7166050/pdf/nihms-1576728.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37850901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Imagination and Heuristics to Learn Strategies that Generalize","authors":"Erik J Peterson, Necati Alp Müyesser, T. Verstynen, Kyle Dunovan","doi":"10.51628/001c.13477","DOIUrl":"https://doi.org/10.51628/001c.13477","url":null,"abstract":"Deep reinforcement learning can match or exceed human performance in stable contexts, but with minor changes to the environment artificial networks, unlike humans, often cannot adapt. Humans rely on a combination of heuristics to simplify computational load and imagination to extend experiential learning to new and more challenging environments. Motivated by theories of the hierarchical organization of the human prefrontal networks, we have developed a model of hierarchical reinforcement learning that combines both heuristics and imagination into a “stumbler-strategist” network. We test performance of this network using Wythoff’s game, a gridworld environment with a known optimal strategy. We show that a heuristic labeling of each position as hot or cold, combined with imagined play, both accelerates learning and promotes transfer to novel games, while also improving model interpretability","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"128 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77798822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Subspace Invariance of Population Responses.","authors":"Elaine Tring, Dario L Ringach","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>In cat visual cortex, the response of a neural population to the linear combination of two sinusoidal gratings (a plaid) can be well approximated by a weighted sum of the population responses to the individual gratings - a property we refer to as <i>subspace invariance</i>. We tested subspace invariance in mouse primary visual cortex by measuring the angle between the population response to a plaid and the plane spanned by the population responses to its individual components. We found robust violations of subspace invariance arising from a strong, negative correlation between the responses of neurons to individual gratings and their responses to the plaid. Contrast invariance, a special case of subspace invariance, also failed. The responses of some neurons decreased with increasing contrast, while others increased. Altogether the data show that subspace and contrast invariance do not hold in mouse primary visual cortex. These findings rule out some models of population coding, including vector averaging, some versions of normalization and temporal multiplexing.</p>","PeriodicalId":74289,"journal":{"name":"Neurons, behavior, data analysis and theory","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10065745/pdf/nihms-1052229.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9281841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}