Analysis of an Attractor Neural Network's Response to Conflicting External Inputs

IF 2.3 · CAS Region 4 (Medicine) · JCR Q1 (Neuroscience)
Kathryn Hedrick, Kechen Zhang
DOI: 10.1186/s13408-018-0061-0 · Journal of Mathematical Neuroscience · Published 2018-05-16 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5955911/pdf/
Citations: 0

Abstract


Analysis of an Attractor Neural Network's Response to Conflicting External Inputs.


The theory of attractor neural networks has been influential in our understanding of the neural processes underlying spatial, declarative, and episodic memory. Many theoretical studies focus on the inherent properties of an attractor, such as its structure and capacity. Relatively little is known about how an attractor neural network responds to external inputs, which often carry conflicting information about a stimulus. In this paper we analyze the behavior of an attractor neural network driven by two conflicting external inputs. Our focus is on analyzing the emergent properties of the megamap model, a quasi-continuous attractor network in which place cells are flexibly recombined to represent a large spatial environment. In this model, the system shows a sharp transition from the winner-take-all mode, which is characteristic of standard continuous attractor neural networks, to a combinatorial mode in which the equilibrium activity pattern combines embedded attractor states in response to conflicting external inputs. We derive a numerical test for determining the operational mode of the system a priori. We then derive a linear transformation from the full megamap model with thousands of neurons to a reduced 2-unit model that has similar qualitative behavior. Our analysis of the reduced model and explicit expressions relating the parameters of the reduced model to the megamap elucidate the conditions under which the combinatorial mode emerges and the dynamics in each mode given the relative strength of the attractor network and the relative strength of the two conflicting inputs. Although we focus on a particular attractor network model, we describe a set of conditions under which our analysis can be applied to more general attractor neural networks.
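The reduced 2-unit model described in the abstract can be illustrated with a minimal rate-model sketch. This is a hypothetical construction for intuition only, not the paper's actual equations or parameters: two units with symmetric recurrent coupling receive two conflicting external inputs, and the coupling strength determines whether the equilibrium is winner-take-all or combinatorial.

```python
import numpy as np

def simulate(w_self, w_cross, inputs, steps=2000, dt=0.01, tau=1.0):
    """Euler-integrate a 2-unit rate model: tau * dx/dt = -x + relu(W x + I)."""
    W = np.array([[w_self, w_cross],
                  [w_cross, w_self]])
    x = np.zeros(2)
    for _ in range(steps):
        x += (dt / tau) * (-x + np.maximum(W @ x + inputs, 0.0))
    return x

# Two conflicting external inputs; the first is slightly stronger.
I = np.array([1.0, 0.8])

# Strong mutual inhibition: winner-take-all mode -- at equilibrium only
# the unit driven by the stronger input remains active.
wta = simulate(w_self=0.5, w_cross=-2.0, inputs=I)

# Weak recurrent coupling: combinatorial mode -- the equilibrium keeps
# both units active, blending the two input-driven states.
comb = simulate(w_self=0.2, w_cross=-0.2, inputs=I)
```

With strong inhibition the weaker unit is silenced (`wta` ≈ [2, 0] for these illustrative parameters), while with weak coupling both units stay active, mirroring the sharp transition between the two operational modes that the paper analyzes.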

Source Journal
Journal of Mathematical Neuroscience
Subject category: Neuroscience (miscellaneous)
Self-citation rate: 0.00%
Articles published: 0
Review turnaround: 13 weeks
Journal description: The Journal of Mathematical Neuroscience (JMN) publishes research articles on the mathematical modeling and analysis of all areas of neuroscience, i.e., the study of the nervous system and its dysfunctions. The focus is on using mathematics as the primary tool for elucidating the fundamental mechanisms responsible for experimentally observed behaviours in neuroscience at all relevant scales, from the molecular world to that of cognition. The aim is to publish work that uses advanced mathematical techniques to illuminate these questions. It publishes full-length original papers, rapid communications, and review articles. Papers that combine theoretical results with convincing numerical experiments are especially encouraged. Papers that introduce and help develop new pieces of mathematical theory likely to be relevant to future studies of the nervous system in general, and the human brain in particular, are also welcome.