Bio-inspired, task-free continual learning through activity regularization.

IF 1.7 | CAS Zone 4 (Engineering) | JCR Q3 | COMPUTER SCIENCE, CYBERNETICS
Biological Cybernetics | Pub Date: 2023-10-01 | Epub Date: 2023-08-17 | DOI: 10.1007/s00422-023-00973-w
Francesco Lässig, Pau Vilimelis Aceituno, Martino Sorbaro, Benjamin F Grewe
{"title":"通过活动规格化进行生物启发的、无任务的持续学习。","authors":"Francesco Lässig,&nbsp;Pau Vilimelis Aceituno,&nbsp;Martino Sorbaro,&nbsp;Benjamin F Grewe","doi":"10.1007/s00422-023-00973-w","DOIUrl":null,"url":null,"abstract":"<p><p>The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.</p>","PeriodicalId":55374,"journal":{"name":"Biological Cybernetics","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10600047/pdf/","citationCount":"1","resultStr":"{\"title\":\"Bio-inspired, task-free continual learning through activity regularization.\",\"authors\":\"Francesco Lässig,&nbsp;Pau Vilimelis Aceituno,&nbsp;Martino Sorbaro,&nbsp;Benjamin F Grewe\",\"doi\":\"10.1007/s00422-023-00973-w\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. 
To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.</p>\",\"PeriodicalId\":55374,\"journal\":{\"name\":\"Biological Cybernetics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10600047/pdf/\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biological Cybernetics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s00422-023-00973-w\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/8/17 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biological Cybernetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s00422-023-00973-w","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/17 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 1

Abstract


The ability to sequentially learn multiple tasks without forgetting is a key skill of biological brains, whereas it represents a major challenge to the field of deep learning. To avoid catastrophic forgetting, various continual learning (CL) approaches have been devised. However, these usually require discrete task boundaries. This requirement seems biologically implausible and often limits the application of CL methods in the real world where tasks are not always well defined. Here, we take inspiration from neuroscience, where sparse, non-overlapping neuronal representations have been suggested to prevent catastrophic forgetting. As in the brain, we argue that these sparse representations should be chosen on the basis of feed forward (stimulus-specific) as well as top-down (context-specific) information. To implement such selective sparsity, we use a bio-plausible form of hierarchical credit assignment known as Deep Feedback Control (DFC) and combine it with a winner-take-all sparsity mechanism. In addition to sparsity, we introduce lateral recurrent connections within each layer to further protect previously learned representations. We evaluate the new sparse-recurrent version of DFC on the split-MNIST computer vision benchmark and show that only the combination of sparsity and intra-layer recurrent connections improves CL performance with respect to standard backpropagation. Our method achieves similar performance to well-known CL methods, such as Elastic Weight Consolidation and Synaptic Intelligence, without requiring information about task boundaries. Overall, we showcase the idea of adopting computational principles from the brain to derive new, task-free learning algorithms for CL.
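The abstract names two concrete mechanisms: a winner-take-all sparsity constraint on each layer's activity, and lateral recurrent connections within a layer. The following PyTorch sketch illustrates the general shape of both ideas. It is an illustration only, not the authors' Deep Feedback Control (DFC) implementation; the `SparseRecurrentLayer` class, the layer sizes, the number of recurrent steps, and the choice of k are all hypothetical.

```python
# Minimal sketch of (1) k-winner-take-all sparsity and (2) intra-layer
# lateral recurrence, the two mechanisms described in the abstract above.
# Not the authors' DFC code; all names and constants are made up.
import torch
import torch.nn as nn


def k_winner_take_all(x: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest activations per sample; zero out the rest."""
    topk = torch.topk(x, k, dim=-1).indices
    mask = torch.zeros_like(x).scatter_(-1, topk, 1.0)
    return x * mask


class SparseRecurrentLayer(nn.Module):
    """Feedforward (stimulus-specific) drive plus a few steps of lateral
    recurrence, with k-WTA keeping the resulting code sparse."""

    def __init__(self, in_features: int, out_features: int, k: int, steps: int = 3):
        super().__init__()
        self.ff = nn.Linear(in_features, out_features)
        self.lateral = nn.Linear(out_features, out_features, bias=False)
        self.k, self.steps = k, steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        drive = self.ff(x)
        h = k_winner_take_all(torch.relu(drive), self.k)
        for _ in range(self.steps):
            # Lateral recurrence lets currently active units shape the final
            # sparse code, which can help shield previously learned
            # representations from interference by new inputs.
            h = k_winner_take_all(torch.relu(drive + self.lateral(h)), self.k)
        return h


# Hypothetical usage on MNIST-sized inputs (as in split-MNIST).
layer = SparseRecurrentLayer(in_features=784, out_features=256, k=25)
h = layer(torch.randn(32, 784))
assert int((h != 0).sum(dim=-1).max()) <= 25  # at most k active units per sample
```

In the paper's framing, which units win would also depend on top-down (context-specific) feedback delivered through DFC's credit-assignment dynamics; the sketch above only captures the feedforward and lateral parts of that picture.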

Source journal
Biological Cybernetics (Engineering Technology – Computer Science, Cybernetics)
CiteScore: 3.50
Self-citation rate: 5.30%
Articles per year: 38
Review time: 6-12 weeks
Journal introduction: Biological Cybernetics is an interdisciplinary medium for theoretical and application-oriented aspects of information processing in organisms, including sensory, motor, cognitive, and ecological phenomena. Topics covered include: mathematical modeling of biological systems; computational, theoretical or engineering studies with relevance for understanding biological information processing; and artificial implementation of biological information processing and self-organizing principles. Under the main aspects of performance and function of systems, emphasis is laid on communication between life sciences and technical/theoretical disciplines.