Efficient Reward-Based Learning through Body Representation in a Spiking Neural Network

Yuji Kawai, Tomohiro Takimoto, Jihoon Park, M. Asada
{"title":"尖峰神经网络中基于身体表征的有效奖励学习","authors":"Yuji Kawai, Tomohiro Takimoto, Jihoon Park, M. Asada","doi":"10.1109/DEVLRN.2018.8761011","DOIUrl":null,"url":null,"abstract":"Brain-body interactions guide the development of behavioral and cognitive functions. Sensory signals during behavior are relayed to the brain and evoke neural activity. This feedback is important for the organization of neural networks via neural plasticity, which in turn facilitates the generation of motor commands for new behaviors. In this study, we investigated how brain-body interactions develop and affect reward-based learning. We constructed a spiking neural network (SNN) model for the reward-based learning of canonical babbling, i.e., combination of a vowel and consonant. Motor commands to a vocal simulator were generated by SNN output and auditory signals representing the vocalized sound were fed back into the SNN. Synaptic weights in the SNN were updated using spike-timing-dependent plasticity (STDP). Connections from the SNN to the vocal simulator were modulated based on reward signals in terms of saliency of the vocalized sound. Our results showed that, under auditory feedback, STDP enabled the model to rapidly acquire babbling-like vocalization. We found that some neurons in the SNN were more highly activated during vocalization of a consonant than during other sounds. That is, neural dynamics in the SNN adapted to task-related articulator movements. Accordingly, body representation in the SNN facilitated brain-body interaction and accelerated the acquisition of babbling behavior.","PeriodicalId":236346,"journal":{"name":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Efficient Reward-Based Learning through Body Representation in a Spiking Neural Network\",\"authors\":\"Yuji Kawai, Tomohiro Takimoto, Jihoon Park, M. Asada\",\"doi\":\"10.1109/DEVLRN.2018.8761011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Brain-body interactions guide the development of behavioral and cognitive functions. Sensory signals during behavior are relayed to the brain and evoke neural activity. This feedback is important for the organization of neural networks via neural plasticity, which in turn facilitates the generation of motor commands for new behaviors. In this study, we investigated how brain-body interactions develop and affect reward-based learning. We constructed a spiking neural network (SNN) model for the reward-based learning of canonical babbling, i.e., combination of a vowel and consonant. Motor commands to a vocal simulator were generated by SNN output and auditory signals representing the vocalized sound were fed back into the SNN. Synaptic weights in the SNN were updated using spike-timing-dependent plasticity (STDP). Connections from the SNN to the vocal simulator were modulated based on reward signals in terms of saliency of the vocalized sound. Our results showed that, under auditory feedback, STDP enabled the model to rapidly acquire babbling-like vocalization. We found that some neurons in the SNN were more highly activated during vocalization of a consonant than during other sounds. That is, neural dynamics in the SNN adapted to task-related articulator movements. 
Accordingly, body representation in the SNN facilitated brain-body interaction and accelerated the acquisition of babbling behavior.\",\"PeriodicalId\":236346,\"journal\":{\"name\":\"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)\",\"volume\":\"47 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2018.8761011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2018.8761011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Brain-body interactions guide the development of behavioral and cognitive functions. Sensory signals during behavior are relayed to the brain and evoke neural activity. This feedback is important for the organization of neural networks via neural plasticity, which in turn facilitates the generation of motor commands for new behaviors. In this study, we investigated how brain-body interactions develop and affect reward-based learning. We constructed a spiking neural network (SNN) model for the reward-based learning of canonical babbling, i.e., combination of a vowel and consonant. Motor commands to a vocal simulator were generated by SNN output and auditory signals representing the vocalized sound were fed back into the SNN. Synaptic weights in the SNN were updated using spike-timing-dependent plasticity (STDP). Connections from the SNN to the vocal simulator were modulated based on reward signals in terms of saliency of the vocalized sound. Our results showed that, under auditory feedback, STDP enabled the model to rapidly acquire babbling-like vocalization. We found that some neurons in the SNN were more highly activated during vocalization of a consonant than during other sounds. That is, neural dynamics in the SNN adapted to task-related articulator movements. Accordingly, body representation in the SNN facilitated brain-body interaction and accelerated the acquisition of babbling behavior.
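The abstract describes two learning rules acting together: STDP on the synapses within the SNN, and reward-modulated updates of the connections from the SNN to the vocal simulator, where the reward is the saliency of the vocalized sound. The following is a minimal Python sketch of how such a combination could be wired up. The network size, time constants, trace-based STDP formulation, and the random placeholders for spiking activity and saliency are illustrative assumptions, not the authors' implementation or parameters.

```python
import numpy as np

# Minimal sketch of the two plasticity rules described in the abstract:
# (1) pairwise, trace-based STDP on the recurrent SNN weights, and
# (2) reward-modulated updates of the SNN-to-vocal-simulator output weights,
#     with a scalar reward standing in for the saliency of the vocalized sound.
# Sizes, constants, and the trace formulation are assumptions for illustration.

rng = np.random.default_rng(0)

N = 100                                   # number of SNN neurons (assumed)
M = 3                                     # motor-command dimensions for the vocal simulator (assumed)
W_rec = rng.normal(0.0, 0.1, (N, N))      # recurrent weights, shaped by STDP
W_out = rng.normal(0.0, 0.1, (M, N))      # output weights, shaped by the reward signal

tau_trace = 20.0                          # spike-trace time constant (ms)
A_plus, A_minus = 0.010, 0.012            # LTP / LTD amplitudes
pre_trace = np.zeros(N)
post_trace = np.zeros(N)

def stdp_step(spikes, dt=1.0):
    """Apply one pairwise STDP update given a binary spike vector of length N."""
    global pre_trace, post_trace
    pre_trace = pre_trace * (1.0 - dt / tau_trace) + spikes
    post_trace = post_trace * (1.0 - dt / tau_trace) + spikes
    fired = spikes.astype(bool)
    # Neurons that just fired, as postsynaptic targets: potentiate their incoming
    # synapses in proportion to the presynaptic traces (pre-before-post -> LTP).
    W_rec[fired, :] += A_plus * pre_trace
    # Neurons that just fired, as presynaptic sources: depress their outgoing
    # synapses in proportion to the postsynaptic traces (post-before-pre -> LTD).
    W_rec[:, fired] -= A_minus * post_trace[:, None]
    np.clip(W_rec, -1.0, 1.0, out=W_rec)

def reward_step(spikes, motor_cmd, reward, lr=1e-3):
    """Scale a Hebbian outer-product update of the output weights by the reward."""
    W_out[:] += lr * reward * np.outer(motor_cmd, spikes)

# Toy loop: random spiking stands in for the SNN dynamics, and a random number
# stands in for the auditory-saliency reward computed from the vocalized sound.
for t in range(1000):
    spikes = (rng.random(N) < 0.05).astype(float)
    motor_cmd = W_out @ spikes            # command sent to the vocal simulator
    reward = rng.random()                 # placeholder for saliency of the sound
    stdp_step(spikes)
    reward_step(spikes, motor_cmd, reward)

print("mean |W_rec|:", np.abs(W_rec).mean(), "mean |W_out|:", np.abs(W_out).mean())
```

In this sketch, the STDP rule organizes the recurrent dynamics independently of the task, while the reward only gates the output pathway; that separation mirrors the abstract's claim that auditory feedback shapes a task-related body representation inside the SNN, which in turn speeds up the reward-based acquisition of babbling.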