A purely spiking approach to reinforcement learning

IF 2.1 | CAS Tier 3 (Psychology) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Mikhail Kiselev, Alexander Ivanitsky, Denis Larionov
{"title":"一种纯粹的尖峰强化学习方法","authors":"Mikhail Kiselev ,&nbsp;Alexander Ivanitsky ,&nbsp;Denis Larionov","doi":"10.1016/j.cogsys.2024.101317","DOIUrl":null,"url":null,"abstract":"<div><div>At present, implementation of learning mechanisms in spiking neural networks (SNN) cannot be considered as a solved scientific problem despite plenty of SNN learning algorithms proposed. It is also true for SNN implementation of reinforcement learning (RL), while RL is especially important for SNNs because of its close relationship to the domains most promising from the viewpoint of SNN application such as robotics. In the present paper, an SNN structure is described which, seemingly, can be used in wide range of RL tasks. The distinctive feature of our approach is usage of only the spike forms of all signals involved — sensory input streams, output signals sent to actuators and reward/punishment signals. Besides that, selection of the neuron/plasticity models was determined by the requirement that they should be easily implemented on modern neurochips. The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike timing dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). In this study, we use the model-free approach to RL but it is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability (inside the class of model-free RL tasks). To test our SNN, we apply it to a simple but non-trivial task of training the network to keep a chaotically moving light spot in the view field of an emulated Dynamic Vision Sensor (DVS) camera. Successful solution of this RL problem can be considered as an evidence in favor of efficiency of our approach.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"89 ","pages":"Article 101317"},"PeriodicalIF":2.1000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A purely spiking approach to reinforcement learning\",\"authors\":\"Mikhail Kiselev ,&nbsp;Alexander Ivanitsky ,&nbsp;Denis Larionov\",\"doi\":\"10.1016/j.cogsys.2024.101317\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>At present, implementation of learning mechanisms in spiking neural networks (SNN) cannot be considered as a solved scientific problem despite plenty of SNN learning algorithms proposed. It is also true for SNN implementation of reinforcement learning (RL), while RL is especially important for SNNs because of its close relationship to the domains most promising from the viewpoint of SNN application such as robotics. In the present paper, an SNN structure is described which, seemingly, can be used in wide range of RL tasks. The distinctive feature of our approach is usage of only the spike forms of all signals involved — sensory input streams, output signals sent to actuators and reward/punishment signals. Besides that, selection of the neuron/plasticity models was determined by the requirement that they should be easily implemented on modern neurochips. 
The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike timing dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). In this study, we use the model-free approach to RL but it is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability (inside the class of model-free RL tasks). To test our SNN, we apply it to a simple but non-trivial task of training the network to keep a chaotically moving light spot in the view field of an emulated Dynamic Vision Sensor (DVS) camera. Successful solution of this RL problem can be considered as an evidence in favor of efficiency of our approach.</div></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"89 \",\"pages\":\"Article 101317\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041724001116\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041724001116","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

At present, the implementation of learning mechanisms in spiking neural networks (SNN) cannot be considered a solved scientific problem, despite the many SNN learning algorithms that have been proposed. The same holds for SNN implementations of reinforcement learning (RL), although RL is especially important for SNNs because of its close relationship to the domains most promising for SNN applications, such as robotics. In the present paper, an SNN structure is described which, seemingly, can be used in a wide range of RL tasks. The distinctive feature of our approach is the use of only the spike forms of all signals involved: sensory input streams, output signals sent to actuators, and reward/punishment signals. Besides that, the neuron and plasticity models were selected under the requirement that they be easy to implement on modern neurochips. The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike-timing-dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). In this study, we use the model-free approach to RL, but it is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability (within the class of model-free RL tasks). To test our SNN, we apply it to a simple but non-trivial task: training the network to keep a chaotically moving light spot within the view field of an emulated Dynamic Vision Sensor (DVS) camera. Successful solution of this RL problem can be considered evidence in favor of the efficiency of our approach.
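
The abstract names two computational ingredients: a generalization of the LIFAT neuron (leaky integrate-and-fire with an adaptive threshold) and a reward-gated, spike-timing-dependent plasticity rule in the spirit of dopamine-modulated plasticity. The Python sketch below is only a minimal illustration of how such components are commonly written, not the paper's actual generalizations; all constants, the function names lifat_step and reward_modulated_stdp, and the eligibility-trace formulation are assumptions made for this example.

import numpy as np

# Illustrative parameters only; the paper does not specify its exact models in the abstract.
DT = 1.0            # simulation time step, ms
TAU_V = 20.0        # membrane potential decay time constant, ms
TAU_TH = 100.0      # adaptive-threshold decay time constant, ms
TH_REST = 1.0       # resting threshold
TH_JUMP = 0.5       # threshold increment after each output spike
TAU_ELIG = 200.0    # eligibility-trace decay time constant, ms
LR = 0.01           # learning rate applied when a reward spike arrives

def lifat_step(v, th, input_current):
    """One update of a leaky integrate-and-fire neuron with adaptive threshold (LIFAT-style)."""
    v = v + DT * (-v / TAU_V) + input_current              # leaky integration of synaptic input
    spike = v >= th                                         # fire when the potential crosses the threshold
    v = np.where(spike, 0.0, v)                             # reset the membrane potential after a spike
    th = TH_REST + (th - TH_REST) * np.exp(-DT / TAU_TH)    # threshold relaxes toward its resting value
    th = th + TH_JUMP * spike                               # and jumps up whenever the neuron fires
    return v, th, spike

def reward_modulated_stdp(w, elig, pre_spikes, post_spikes, reward_spike):
    """Sketch of dopamine-style plasticity: pre/post coincidences are stored in an
    eligibility trace and converted into weight changes only when a reward
    (or, with negative sign, punishment) spike arrives."""
    elig = elig * np.exp(-DT / TAU_ELIG)                    # eligibility trace decays over time
    elig = elig + np.outer(post_spikes, pre_spikes)         # coincident pre/post spikes mark the synapse
    if reward_spike != 0:
        w = np.clip(w + LR * reward_spike * elig, 0.0, 1.0) # the reward signal gates the actual update
    return w, elig

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_out = 16, 4
    w = rng.uniform(0.0, 0.5, size=(n_out, n_in))           # input-to-output synaptic weights
    elig = np.zeros_like(w)
    v = np.zeros(n_out)
    th = np.full(n_out, TH_REST)
    for t in range(500):
        pre = (rng.random(n_in) < 0.05).astype(float)       # Poisson-like input spike vector
        v, th, post = lifat_step(v, th, w @ pre)
        reward = 1.0 if t % 100 == 99 else 0.0              # occasional reward spike
        w, elig = reward_modulated_stdp(w, elig, pre, post.astype(float), reward)
    print("mean weight after training:", w.mean())

The design point this sketch tries to reflect is that spike coincidences only mark synapses via the eligibility trace; weights change solely when a reward or punishment spike arrives, which is consistent with the abstract's requirement that all signals, including reinforcement, are delivered in spike form.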
Source journal
Cognitive Systems Research (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 9.40
Self-citation rate: 5.10%
Articles per year: 40
Review time: >12 weeks
Journal description: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.