Bio-Inspired, Task-Free Continual Learning through Activity Regularization

Impact Factor: 1.7 · CAS Zone 4 (Engineering & Technology) · JCR Q3 (Computer Science, Cybernetics)
Francesco Lässig, Pau Vilimelis Aceituno, M. Sorbaro, B. Grewe
{"title":"Bio-Inspired, Task-Free Continual Learning through Activity Regularization","authors":"Francesco Lassig, Pau Vilimelis Aceituno, M. Sorbaro, B. Grewe","doi":"10.48550/arXiv.2212.04316","DOIUrl":null,"url":null,"abstract":"The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.","PeriodicalId":55374,"journal":{"name":"Biological Cybernetics","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2022-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biological Cybernetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.48550/arXiv.2212.04316","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 1

Abstract

The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it remains a major challenge for deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world, where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feedforward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance relative to standard backpropagation. Our method achieves performance similar to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
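The two mechanisms named in the abstract, winner-take-all (WTA) sparsity and intra-layer lateral recurrence, can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration only, not the authors' DFC-based implementation: the module name SparseRecurrentLayer, the top-k formulation of WTA, and the fixed number of recurrent settling steps are all assumptions chosen for clarity.

import torch
import torch.nn as nn

class SparseRecurrentLayer(nn.Module):
    # Illustrative sketch (hypothetical, not the paper's code): a hidden
    # layer combining top-k winner-take-all sparsity with lateral
    # (intra-layer) recurrent connections, as described in the abstract.
    def __init__(self, in_features, out_features, k, recurrent_steps=3):
        super().__init__()
        self.ff = nn.Linear(in_features, out_features)                    # feedforward weights
        self.lateral = nn.Linear(out_features, out_features, bias=False)  # lateral recurrent weights
        self.k = k                                                        # winners kept active per sample
        self.recurrent_steps = recurrent_steps                            # settling iterations (assumed)

    def winner_take_all(self, a):
        # Zero out all but the k largest activations in each sample.
        indices = torch.topk(a, self.k, dim=1).indices
        mask = torch.zeros_like(a).scatter_(1, indices, 1.0)
        return a * mask

    def forward(self, x):
        a = self.winner_take_all(torch.relu(self.ff(x)))
        # Let the lateral connections reshape the sparse code over a few steps.
        for _ in range(self.recurrent_steps):
            a = self.winner_take_all(torch.relu(self.ff(x) + self.lateral(a)))
        return a

For split-MNIST-sized inputs one might instantiate, say, SparseRecurrentLayer(784, 200, k=20). Note that the paper trains such layers with Deep Feedback Control rather than with standard backpropagation, which this sketch does not reproduce.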
Source journal: Biological Cybernetics (Engineering & Technology: Computer Science, Cybernetics)
CiteScore: 3.50
Self-citation rate: 5.30%
Articles per year: 38
Review time: 6-12 weeks
Journal description: Biological Cybernetics is an interdisciplinary medium for theoretical and application-oriented aspects of information processing in organisms, including sensory, motor, cognitive, and ecological phenomena. Topics covered include: mathematical modeling of biological systems; computational, theoretical, or engineering studies with relevance for understanding biological information processing; and artificial implementation of biological information processing and self-organizing principles. With an emphasis on the performance and function of systems, the journal promotes communication between the life sciences and technical/theoretical disciplines.