Sequential Learning in the Dense Associative Memory

IF 2.1 | Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Hayden McAlister;Anthony Robins;Lech Szymanski
{"title":"密集联想记忆中的顺序学习。","authors":"Hayden McAlister;Anthony Robins;Lech Szymanski","doi":"10.1162/neco.a.20","DOIUrl":null,"url":null,"abstract":"Sequential learning involves learning tasks in a sequence and proves challenging for most neural networks. Biological neural networks regularly succeed at the sequential learning challenge and are even capable of transferring knowledge both forward and backward between tasks. Artificial neural networks often totally fail to transfer performance between tasks and regularly suffer from degraded performance or catastrophic forgetting on previous tasks. Models of associative memory have been used to investigate the discrepancy between biological and artificial neural networks due to their biological ties and inspirations, of which the Hopfield network is the most studied model. The dense associative memory (DAM), or modern Hopfield network, generalizes the Hopfield network, allowing for greater capacities and prototype learning behaviors while still retaining the associative memory structure. We give a substantial review of the sequential learning space with particular respect to the Hopfield network and associative memories. We present the first published benchmarks of sequential learning in the DAM using various sequential learning techniques and analyze the results of the sequential learning to demonstrate previously unseen transitions in the behavior of the DAM. This letter also discusses the departure from biological plausibility that may affect the utility of the DAM as a tool for studying biological neural networks. We present our findings, including the effectiveness of a range of state-of-the-art sequential learning methods when applied to the DAM, and use these methods to further the understanding of DAM properties and behaviors.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 10","pages":"1877-1924"},"PeriodicalIF":2.1000,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sequential Learning in the Dense Associative Memory\",\"authors\":\"Hayden McAlister;Anthony Robins;Lech Szymanski\",\"doi\":\"10.1162/neco.a.20\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sequential learning involves learning tasks in a sequence and proves challenging for most neural networks. Biological neural networks regularly succeed at the sequential learning challenge and are even capable of transferring knowledge both forward and backward between tasks. Artificial neural networks often totally fail to transfer performance between tasks and regularly suffer from degraded performance or catastrophic forgetting on previous tasks. Models of associative memory have been used to investigate the discrepancy between biological and artificial neural networks due to their biological ties and inspirations, of which the Hopfield network is the most studied model. The dense associative memory (DAM), or modern Hopfield network, generalizes the Hopfield network, allowing for greater capacities and prototype learning behaviors while still retaining the associative memory structure. We give a substantial review of the sequential learning space with particular respect to the Hopfield network and associative memories. 
We present the first published benchmarks of sequential learning in the DAM using various sequential learning techniques and analyze the results of the sequential learning to demonstrate previously unseen transitions in the behavior of the DAM. This letter also discusses the departure from biological plausibility that may affect the utility of the DAM as a tool for studying biological neural networks. We present our findings, including the effectiveness of a range of state-of-the-art sequential learning methods when applied to the DAM, and use these methods to further the understanding of DAM properties and behaviors.\",\"PeriodicalId\":54731,\"journal\":{\"name\":\"Neural Computation\",\"volume\":\"37 10\",\"pages\":\"1877-1924\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-09-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Computation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11180096/\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Computation","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11180096/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Sequential learning involves learning tasks in a sequence and proves challenging for most neural networks. Biological neural networks regularly succeed at the sequential learning challenge and are even capable of transferring knowledge both forward and backward between tasks. Artificial neural networks often totally fail to transfer performance between tasks and regularly suffer from degraded performance or catastrophic forgetting on previous tasks. Models of associative memory have been used to investigate the discrepancy between biological and artificial neural networks due to their biological ties and inspirations, of which the Hopfield network is the most studied model. The dense associative memory (DAM), or modern Hopfield network, generalizes the Hopfield network, allowing for greater capacities and prototype learning behaviors while still retaining the associative memory structure. We give a substantial review of the sequential learning space with particular respect to the Hopfield network and associative memories. We present the first published benchmarks of sequential learning in the DAM using various sequential learning techniques and analyze the results of the sequential learning to demonstrate previously unseen transitions in the behavior of the DAM. This letter also discusses the departure from biological plausibility that may affect the utility of the DAM as a tool for studying biological neural networks. We present our findings, including the effectiveness of a range of state-of-the-art sequential learning methods when applied to the DAM, and use these methods to further the understanding of DAM properties and behaviors.
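
For readers unfamiliar with the model, below is a minimal sketch of a dense associative memory recall step using the polynomial interaction function of the generic Krotov-Hopfield formulation. It is an illustration of the general model class only, not the implementation or experimental setup used in this paper, and the names introduced here (dam_update, memories, n) are assumptions made for clarity.

```python
# Minimal dense associative memory (modern Hopfield network) recall sketch,
# following the generic Krotov-Hopfield energy E(s) = -sum_mu F(xi_mu . s)
# with a polynomial interaction function F(x) = x^n.
# Names (dam_update, memories, n) are illustrative, not from the paper's code.
import numpy as np

def dam_update(state: np.ndarray, memories: np.ndarray, n: int = 2) -> np.ndarray:
    """One asynchronous update sweep over a bipolar state vector.

    state:    shape (d,), entries in {-1, +1}
    memories: shape (K, d), stored bipolar patterns
    n:        interaction exponent; n = 2 recovers the classical Hopfield energy
    """
    F = lambda x: x ** n  # polynomial interaction function
    s = state.copy()
    for i in range(len(s)):
        s_plus, s_minus = s.copy(), s.copy()
        s_plus[i], s_minus[i] = 1, -1
        # Set neuron i to whichever sign gives the lower energy -sum_mu F(xi_mu . s).
        energy_gap = np.sum(F(memories @ s_plus)) - np.sum(F(memories @ s_minus))
        s[i] = 1 if energy_gap >= 0 else -1
    return s

# Usage: recall a stored pattern from a corrupted probe.
rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(5, 64))   # five stored 64-dimensional bipolar patterns
probe = memories[0].copy()
probe[:10] *= -1                               # flip 10 bits of the first pattern
recalled = dam_update(probe, memories, n=3)
print("overlap with stored pattern:", memories[0] @ recalled / 64)
```

Raising the exponent n sharpens the energy landscape around each stored pattern, which is the mechanism behind the larger capacities and prototype-like behavior mentioned in the abstract; the sequential-learning experiments reported in the paper then ask how well such a network retains earlier patterns as later tasks are trained.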
Source journal: Neural Computation
Category: Engineering & Technology - Computer Science: Artificial Intelligence
CiteScore: 6.30
Self-citation rate: 3.40%
Articles published: 83
Review time: 3.0 months
Journal description: Neural Computation is uniquely positioned at the crossroads between neuroscience and TMCS and welcomes the submission of original papers from all areas of TMCS, including: Advanced experimental design; Analysis of chemical sensor data; Connectomic reconstructions; Analysis of multielectrode and optical recordings; Genetic data for cell identity; Analysis of behavioral data; Multiscale models; Analysis of molecular mechanisms; Neuroinformatics; Analysis of brain imaging data; Neuromorphic engineering; Principles of neural coding, computation, circuit dynamics, and plasticity; Theories of brain function.