Neurons, behavior, data analysis and theory: Latest Articles

Representation learning with reward prediction errors
S. Gershman
Neurons, behavior, data analysis and theory. Published 2019-01-23. DOI: 10.51628/001c.37270
Abstract: The Reward Prediction Error hypothesis proposes that phasic activity in the midbrain dopaminergic system reflects prediction errors needed for learning in reinforcement learning. Besides the well-documented association between dopamine and reward processing, dopamine is implicated in a variety of functions without a clear relationship to reward prediction error. Fluctuations in dopamine levels influence the subjective perception of time, dopamine bursts precede the generation of motor responses, and the dopaminergic system innervates regions of the brain, including hippocampus and areas in prefrontal cortex, whose function is not uniquely tied to reward. In this manuscript, we propose that a common theme linking these functions is representation, and that prediction errors signaled by the dopamine system, in addition to driving associative learning, can also support the acquisition of adaptive state representations. In a series of simulations, we show how this extension can account for the role of dopamine in temporal and spatial representation, motor response, and abstract categorization tasks. By extending the role of dopamine signals to learning state representations, we resolve a critical challenge to the Reward Prediction Error hypothesis of dopamine function.
Citations: 36
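The idea that one error signal could drive both value learning and representation learning can be illustrated with a toy temporal-difference learner in which the reward prediction error updates the value weights and a learned linear feature map. The sketch below is a hypothetical illustration, not the paper's model or code; the learning rates, layer sizes, and the particular representation-update rule are all assumptions.

```python
# Minimal sketch (not the paper's code): a TD(0) learner whose reward
# prediction error updates both the value weights and a learned linear
# state representation, illustrating how one error signal can drive
# associative learning and representation learning together.
# All names and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_features = 8, 4                                   # raw observation size, learned feature size
W_rep = rng.normal(scale=0.1, size=(n_features, n_obs))    # representation (feature map)
w_val = np.zeros(n_features)                               # value weights
alpha_val, alpha_rep, gamma = 0.1, 0.01, 0.95

def features(obs):
    """Learned linear state representation."""
    return W_rep @ obs

def td_step(obs, reward, next_obs):
    """One TD(0) update; the same RPE adjusts values and representation."""
    phi, phi_next = features(obs), features(next_obs)
    rpe = reward + gamma * (w_val @ phi_next) - (w_val @ phi)  # dopamine-like error
    w_val += alpha_val * rpe * phi                     # associative (value) learning
    W_rep += alpha_rep * rpe * np.outer(w_val, obs)    # representation learning
    return rpe

# toy usage: random transitions with reward tied to one observation dimension
for _ in range(1000):
    obs, next_obs = rng.normal(size=n_obs), rng.normal(size=n_obs)
    td_step(obs, reward=float(obs[0] > 1.0), next_obs=next_obs)
```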
Performance of normative and approximate evidence accumulation on the dynamic clicks task.
Adrian E Radillo, Alan Veliz-Cuba, Krešimir Josić, Zachary P Kilpatrick
Neurons, behavior, data analysis and theory. Published 2019-01-01 (Epub 2019-10-09). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7166050/pdf/nihms-1576728.pdf
Abstract: The aim of a number of psychophysics tasks is to uncover how mammals make decisions in a world that is in flux. Here we examine the characteristics of ideal and near-ideal observers in a task of this type. We ask when and how performance depends on task parameters and design, and, in turn, what observer performance tells us about their decision-making process. In the dynamic clicks task subjects hear two streams (left and right) of Poisson clicks with different rates. Subjects are rewarded when they correctly identify the side with the higher rate, as this side switches unpredictably. We show that a reduced set of task parameters defines regions in parameter space in which optimal, but not near-optimal, observers maintain constant response accuracy. We also show that for a range of task parameters an approximate normative model must be finely tuned to reach near-optimal performance, illustrating a potential way to distinguish between normative models and their approximations. In addition, we show that using the negative log-likelihood and the 0/1-loss functions to fit these types of models is not equivalent: the 0/1-loss leads to a bias in parameter recovery that increases with sensory noise. These findings suggest ways to tease apart models that are hard to distinguish when tuned exactly, and point to general pitfalls in experimental design, model fitting, and interpretation of the resulting data.
Citations: 0
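The normative observer for this kind of task is commonly described by a log-likelihood ratio that jumps by ±log(λ_high/λ_low) at each click and decays nonlinearly between clicks at a rate set by the environmental hazard rate. The following is a minimal simulation sketch under those assumptions; it is not the authors' code, and the click rates, hazard rate, trial length, and time step are illustrative choices.

```python
# Minimal sketch (assumptions, not the authors' code) of a normative
# accumulator for the dynamic clicks task: the hidden high-rate side
# switches as a telegraph process with hazard rate h, clicks arrive as
# Poisson streams, the log-likelihood ratio y jumps by +/- kappa at each
# click and leaks nonlinearly between clicks (dy/dt = -2*h*sinh(y)).
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(lam_hi=20.0, lam_lo=5.0, h=1.0, T=2.0, dt=1e-3):
    kappa = np.log(lam_hi / lam_lo)        # evidence carried by a single click
    state = rng.choice([+1, -1])           # +1: right stream has the high rate
    y = 0.0
    for _ in range(int(T / dt)):
        if rng.random() < h * dt:          # environment switches sides
            state = -state
        r_rate = lam_hi if state == +1 else lam_lo
        l_rate = lam_lo if state == +1 else lam_hi
        if rng.random() < r_rate * dt:     # right click
            y += kappa
        if rng.random() < l_rate * dt:     # left click
            y -= kappa
        y += -2.0 * h * np.sinh(y) * dt    # nonlinear leak toward zero
    if y == 0.0:                           # undecided: guess at random
        return rng.random() < 0.5
    return np.sign(y) == state             # correct if the sign matches the state

accuracy = np.mean([simulate_trial() for _ in range(500)])
print(f"simulated ideal-observer accuracy: {accuracy:.3f}")
```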
Combining Imagination and Heuristics to Learn Strategies that Generalize
Erik J Peterson, Necati Alp Müyesser, T. Verstynen, Kyle Dunovan
Neurons, behavior, data analysis and theory. Published 2018-09-10. DOI: 10.51628/001c.13477
Abstract: Deep reinforcement learning can match or exceed human performance in stable contexts, but with minor changes to the environment artificial networks, unlike humans, often cannot adapt. Humans rely on a combination of heuristics to simplify computational load and imagination to extend experiential learning to new and more challenging environments. Motivated by theories of the hierarchical organization of the human prefrontal networks, we have developed a model of hierarchical reinforcement learning that combines both heuristics and imagination into a "stumbler-strategist" network. We test performance of this network using Wythoff's game, a gridworld environment with a known optimal strategy. We show that a heuristic labeling of each position as hot or cold, combined with imagined play, both accelerates learning and promotes transfer to novel games, while also improving model interpretability.
Citations: 1
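Wythoff's game has a closed-form optimal strategy: the losing ("cold") positions are the pairs (⌊nφ⌋, ⌊nφ²⌋), with φ the golden ratio, and every other position is "hot". The sketch below shows one way such a hot/cold labeling could be computed for a small board; it is an illustrative stand-in, not the heuristic module used in the paper.

```python
# Minimal sketch (an illustrative assumption, not the authors' code):
# cold positions of Wythoff's game, i.e. positions from which the player
# to move loses under optimal play, follow the Beatty sequences
# (floor(n*phi), floor(n*phi^2)). A board-wide hot/cold labeling of this
# kind is the sort of heuristic described in the abstract.
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def is_cold(x, y):
    """True if (x, y) is a losing ('cold') position in Wythoff's game."""
    a, b = min(x, y), max(x, y)
    n = b - a
    return a == math.floor(n * PHI)

def hot_cold_board(size):
    """Label every position on a size-by-size board as 'C' (cold) or 'H' (hot)."""
    return [["C" if is_cold(x, y) else "H" for y in range(size)]
            for x in range(size)]

# toy usage: print the labeling for an 8x8 gridworld
for row in hot_cold_board(8):
    print(" ".join(row))
```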
On the Subspace Invariance of Population Responses.
Elaine Tring, Dario L Ringach
Neurons, behavior, data analysis and theory. Published 2018-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10065745/pdf/nihms-1052229.pdf
Abstract: In cat visual cortex, the response of a neural population to the linear combination of two sinusoidal gratings (a plaid) can be well approximated by a weighted sum of the population responses to the individual gratings, a property we refer to as subspace invariance. We tested subspace invariance in mouse primary visual cortex by measuring the angle between the population response to a plaid and the plane spanned by the population responses to its individual components. We found robust violations of subspace invariance arising from a strong, negative correlation between the responses of neurons to individual gratings and their responses to the plaid. Contrast invariance, a special case of subspace invariance, also failed. The responses of some neurons decreased with increasing contrast, while others increased. Altogether the data show that subspace and contrast invariance do not hold in mouse primary visual cortex. These findings rule out some models of population coding, including vector averaging, some versions of normalization and temporal multiplexing.
Citations: 0
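The subspace test described in the abstract reduces to measuring the angle between the plaid response vector and the plane spanned by the two grating response vectors. Below is a small, self-contained sketch of that computation on simulated data; the population size, noise level, and mixing weights are arbitrary assumptions, not values from the study.

```python
# Minimal sketch (hypothetical data, not the authors' analysis code) of the
# geometric test described in the abstract: the angle between the population
# response to a plaid and the plane spanned by the responses to its two
# component gratings. Under subspace invariance this angle should be near 0.
import numpy as np

def angle_to_subspace(plaid, grating_a, grating_b):
    """Angle (degrees) between a response vector and span{grating_a, grating_b}."""
    basis = np.linalg.qr(np.column_stack([grating_a, grating_b]))[0]  # orthonormal basis
    projection = basis @ (basis.T @ plaid)                            # project onto the plane
    cos_theta = np.linalg.norm(projection) / np.linalg.norm(plaid)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# toy usage with simulated population responses (n neurons)
rng = np.random.default_rng(2)
n = 200
resp_a = rng.poisson(5.0, n).astype(float)
resp_b = rng.poisson(5.0, n).astype(float)
resp_plaid = 0.6 * resp_a + 0.4 * resp_b + rng.normal(0.0, 1.0, n)   # lies near the plane
print(f"angle to component plane: {angle_to_subspace(resp_plaid, resp_a, resp_b):.1f} deg")
```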