Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.

IF 4.6 | Zone 2 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy
{"title":"环境变化下自主学习的进化可塑性。","authors":"Anil Yaman,&nbsp;Giovanni Iacca,&nbsp;Decebal Constantin Mocanu,&nbsp;Matt Coler,&nbsp;George Fletcher,&nbsp;Mykola Pechenizkiy","doi":"10.1162/evco_a_00286","DOIUrl":null,"url":null,"abstract":"<p><p>A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, that are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"29 3","pages":"391-414"},"PeriodicalIF":4.6000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions.\",\"authors\":\"Anil Yaman,&nbsp;Giovanni Iacca,&nbsp;Decebal Constantin Mocanu,&nbsp;Matt Coler,&nbsp;George Fletcher,&nbsp;Mykola Pechenizkiy\",\"doi\":\"10.1162/evco_a_00286\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, that are thoroughly discussed. 
Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.</p>\",\"PeriodicalId\":50470,\"journal\":{\"name\":\"Evolutionary Computation\",\"volume\":\"29 3\",\"pages\":\"391-414\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2021-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evolutionary Computation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1162/evco_a_00286\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evolutionary Computation","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1162/evco_a_00286","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 11

Abstract


A fundamental aspect of learning in biological neural networks is the plasticity property which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes, based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, that are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
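The abstract describes the approach only at a high level. As a rough illustration of the idea, and not the authors' implementation, the sketch below evolves a discrete Hebbian rule table with a plain genetic algorithm, where each rule maps a local (pre-, post-synaptic) activation pattern to a weight change applied online during an agent's lifetime. The toy task, the binary rule encoding, and all parameter values are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "rule" maps each discretized (pre, post) activation pair to a synaptic
# change in {-1, 0, +1}; with binary neuron states there are 4 local cases.
N_CASES = 4          # (pre, post) in {(0,0), (0,1), (1,0), (1,1)}
ETA = 0.05           # learning-rate constant (assumed value)

def hebbian_update(w, pre, post, rule):
    """Apply the discrete local rule to every synapse of a single-layer net."""
    # index of the local case for each (pre_j, post_i) pair
    case = (2 * pre[None, :] + post[:, None]).astype(int)   # shape (n_out, n_in)
    return np.clip(w + ETA * rule[case], -1.0, 1.0)

def lifetime_fitness(rule, steps=200, n_in=8, n_out=2):
    """Toy stand-in for an online lifetime-learning task: the agent must
    learn, through plasticity alone, to output the value of one input bit."""
    w = rng.uniform(-0.1, 0.1, size=(n_out, n_in))
    reward = 0.0
    for _ in range(steps):
        x = rng.integers(0, 2, size=n_in)            # binary sensory input
        y = (w @ x > 0).astype(int)                  # binary neuron outputs
        target = np.array([x[0], 1 - x[0]])          # desired behaviour
        reward += float(np.all(y == target))
        w = hebbian_update(w, x, y, rule)            # plastic change during lifetime
    return reward / steps

def evolve(pop_size=30, generations=40):
    """Plain genetic algorithm over the finite space of discrete rules."""
    pop = rng.integers(-1, 2, size=(pop_size, N_CASES))       # genes in {-1, 0, 1}
    for _ in range(generations):
        fit = np.array([lifetime_fitness(ind) for ind in pop])
        order = np.argsort(fit)[::-1]
        elites = pop[order[: pop_size // 4]]                   # truncation selection
        children = elites[rng.integers(0, len(elites), pop_size - len(elites))].copy()
        mutate = rng.random(children.shape) < 0.1              # per-gene mutation
        children[mutate] = rng.integers(-1, 2, size=mutate.sum())
        pop = np.vstack([elites, children])
    return pop[np.argmax([lifetime_fitness(ind) for ind in pop])]

if __name__ == "__main__":
    print("best evolved rule (dw for cases 00,01,10,11):", evolve())
```

The key point the sketch tries to convey is that the genome is the plasticity rule itself, not the network weights: fitness is measured by how well an agent learns within its lifetime, so the GA selects for rules that produce useful online adaptation rather than for fixed solutions.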

Source journal: Evolutionary Computation (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 6.40
Self-citation rate: 1.50%
Articles per year: 20
Review time: 3 months
Journal description: Evolutionary Computation is a leading journal in its field. It provides an international forum for facilitating and enhancing the exchange of information among researchers involved in both the theoretical and practical aspects of computational systems drawing their inspiration from nature, with particular emphasis on evolutionary models of computation such as genetic algorithms, evolution strategies, classifier systems, evolutionary programming, and genetic programming. It welcomes articles from related fields such as swarm intelligence (e.g., Ant Colony Optimization and Particle Swarm Optimization) and other nature-inspired computation paradigms (e.g., Artificial Immune Systems). As well as publishing articles describing theoretical and/or experimental work, the journal welcomes application-focused papers describing breakthrough results in an application domain, and methodological papers where the specificities of the real-world problem led to significant algorithmic improvements that could possibly be generalized to other areas.