Neuro-Inspired Reinforcement Learning to Improve Trajectory Prediction in Reward-Guided Behavior.

IF 6.6 · Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
International Journal of Neural Systems · Pub Date: 2022-09-01 · Epub Date: 2022-08-19 · DOI: 10.1142/S0129065722500381
Bo-Wei Chen, Shih-Hung Yang, Chao-Hung Kuo, Jia-Wei Chen, Yu-Chun Lo, Yun-Ting Kuo, Yi-Chen Lin, Hao-Cheng Chang, Sheng-Huang Lin, Xiao Yu, Boyi Qu, Shuan-Chu Vina Ro, Hsin-Yi Lai, You-Yin Chen
{"title":"Neuro-Inspired Reinforcement Learning to Improve Trajectory Prediction in Reward-Guided Behavior.","authors":"Bo-Wei Chen,&nbsp;Shih-Hung Yang,&nbsp;Chao-Hung Kuo,&nbsp;Jia-Wei Chen,&nbsp;Yu-Chun Lo,&nbsp;Yun-Ting Kuo,&nbsp;Yi-Chen Lin,&nbsp;Hao-Cheng Chang,&nbsp;Sheng-Huang Lin,&nbsp;Xiao Yu,&nbsp;Boyi Qu,&nbsp;Shuan-Chu Vina Ro,&nbsp;Hsin-Yi Lai,&nbsp;You-Yin Chen","doi":"10.1142/S0129065722500381","DOIUrl":null,"url":null,"abstract":"<p><p>Hippocampal pyramidal cells and interneurons play a key role in spatial navigation. In goal-directed behavior associated with rewards, the spatial firing pattern of pyramidal cells is modulated by the animal's moving direction toward a reward, with a dependence on auditory, olfactory, and somatosensory stimuli for head orientation. Additionally, interneurons in the CA1 region of the hippocampus monosynaptically connected to CA1 pyramidal cells are modulated by a complex set of interacting brain regions related to reward and recall. The computational method of reinforcement learning (RL) has been widely used to investigate spatial navigation, which in turn has been increasingly used to study rodent learning associated with the reward. The rewards in RL are used for discovering a desired behavior through the integration of two streams of neural activity: trial-and-error interactions with the external environment to achieve a goal, and the intrinsic motivation primarily driven by brain reward system to accelerate learning. Recognizing the potential benefit of the neural representation of this reward design for novel RL architectures, we propose a RL algorithm based on [Formula: see text]-learning with a perspective on biomimetics (neuro-inspired RL) to decode rodent movement trajectories. The reward function, inspired by the neuronal information processing uncovered in the hippocampus, combines the preferred direction of pyramidal cell firing as the extrinsic reward signal with the coupling between pyramidal cell-interneuron pairs as the intrinsic reward signal. Our experimental results demonstrate that the <i>neuro-inspired</i> RL, with a combined use of extrinsic and intrinsic rewards, outperforms other spatial decoding algorithms, including RL methods that use a single reward function. The new RL algorithm could help accelerate learning convergence rates and improve the prediction accuracy for moving trajectories.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"32 9","pages":"2250038"},"PeriodicalIF":6.6000,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Neural Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/S0129065722500381","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/8/19 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

Hippocampal pyramidal cells and interneurons play a key role in spatial navigation. In goal-directed behavior associated with rewards, the spatial firing pattern of pyramidal cells is modulated by the animal's moving direction toward a reward, with a dependence on auditory, olfactory, and somatosensory stimuli for head orientation. Additionally, interneurons in the CA1 region of the hippocampus that are monosynaptically connected to CA1 pyramidal cells are modulated by a complex set of interacting brain regions related to reward and recall. The computational method of reinforcement learning (RL) has been widely used to investigate spatial navigation, and has in turn been increasingly used to study rodent learning associated with reward. Rewards in RL are used to discover a desired behavior through the integration of two streams of neural activity: trial-and-error interactions with the external environment to achieve a goal, and intrinsic motivation, primarily driven by the brain's reward system, to accelerate learning. Recognizing the potential benefit of the neural representation of this reward design for novel RL architectures, we propose an RL algorithm based on Q-learning with a perspective on biomimetics (neuro-inspired RL) to decode rodent movement trajectories. The reward function, inspired by the neuronal information processing uncovered in the hippocampus, combines the preferred direction of pyramidal cell firing as the extrinsic reward signal with the coupling between pyramidal cell-interneuron pairs as the intrinsic reward signal. Our experimental results demonstrate that the neuro-inspired RL, with a combined use of extrinsic and intrinsic rewards, outperforms other spatial decoding algorithms, including RL methods that use a single reward function. The new RL algorithm could help accelerate learning convergence and improve prediction accuracy for movement trajectories.
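
To make the reward design concrete, the sketch below shows a standard tabular Q-learning update in which the reward is a weighted blend of an extrinsic term (e.g. derived from the preferred firing direction of pyramidal cells) and an intrinsic term (e.g. derived from pyramidal cell-interneuron coupling). This is a minimal illustration only: the additive blending rule, the weight `beta`, the toy state/action discretization, and all function names are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of tabular Q-learning with a
# combined reward: an extrinsic term plus a weighted intrinsic term.

def combined_reward(extrinsic, intrinsic, beta=0.5):
    """Blend extrinsic and intrinsic reward signals; beta is a hypothetical weight."""
    return extrinsic + beta * intrinsic

def q_learning_step(Q, state, action, next_state, extrinsic, intrinsic,
                    alpha=0.1, gamma=0.9, beta=0.5):
    """One tabular Q-learning update using the combined reward."""
    r = combined_reward(extrinsic, intrinsic, beta)
    td_target = r + gamma * np.max(Q[next_state])        # bootstrap from best next action
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

# Toy usage: 16 discretized position states, 4 movement actions.
Q = np.zeros((16, 4))
Q = q_learning_step(Q, state=0, action=2, next_state=1,
                    extrinsic=1.0,   # e.g. heading matches the preferred firing direction
                    intrinsic=0.3)   # e.g. strength of pyramidal cell-interneuron coupling
```

In the paper, the extrinsic and intrinsic signals are derived from recorded hippocampal activity rather than the hand-set scalar values used in this toy example.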

Source Journal
International Journal of Neural Systems (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 28.80%
Articles published: 116
Review time: 24 months
About the journal: The International Journal of Neural Systems is a monthly, rigorously peer-reviewed transdisciplinary journal focusing on information processing in both natural and artificial neural systems. Special interests include machine learning, computational neuroscience and neurology. The journal prioritizes innovative, high-impact articles spanning multiple fields, including neurosciences and computer science and engineering. It adopts an open-minded approach to this multidisciplinary field, serving as a platform for novel ideas and enhanced understanding of collective and cooperative phenomena in computationally capable systems.