Abstraction in Neural Networks

Nancy Lynch
{"title":"神经网络中的抽象","authors":"Nancy Lynch","doi":"arxiv-2408.02125","DOIUrl":null,"url":null,"abstract":"We show how brain networks, modeled as Spiking Neural Networks, can be viewed\nat different levels of abstraction. Lower levels include complications such as\nfailures of neurons and edges. Higher levels are more abstract, making\nsimplifying assumptions to avoid these complications. We show precise\nrelationships between executions of networks at different levels, which enables\nus to understand the behavior of lower-level networks in terms of the behavior\nof higher-level networks. We express our results using two abstract networks, A1 and A2, one to express\nfiring guarantees and the other to express non-firing guarantees, and one\ndetailed network D. The abstract networks contain reliable neurons and edges,\nwhereas the detailed network has neurons and edges that may fail, subject to\nsome constraints. Here we consider just initial stopping failures. To define\nthese networks, we begin with abstract network A1 and modify it systematically\nto obtain the other two networks. To obtain A2, we simply lower the firing\nthresholds of the neurons. To obtain D, we introduce failures of neurons and\nedges, and incorporate redundancy in the neurons and edges in order to\ncompensate for the failures. We also define corresponding inputs for the\nnetworks, and corresponding executions of the networks. We prove two main theorems, one relating corresponding executions of A1 and D\nand the other relating corresponding executions of A2 and D. Together, these\ngive both firing and non-firing guarantees for the detailed network D. We also\ngive a third theorem, relating the effects of D on an external reliable\nactuator neuron to the effects of the abstract networks on the same actuator\nneuron.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Abstraction in Neural Networks\",\"authors\":\"Nancy Lynch\",\"doi\":\"arxiv-2408.02125\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We show how brain networks, modeled as Spiking Neural Networks, can be viewed\\nat different levels of abstraction. Lower levels include complications such as\\nfailures of neurons and edges. Higher levels are more abstract, making\\nsimplifying assumptions to avoid these complications. We show precise\\nrelationships between executions of networks at different levels, which enables\\nus to understand the behavior of lower-level networks in terms of the behavior\\nof higher-level networks. We express our results using two abstract networks, A1 and A2, one to express\\nfiring guarantees and the other to express non-firing guarantees, and one\\ndetailed network D. The abstract networks contain reliable neurons and edges,\\nwhereas the detailed network has neurons and edges that may fail, subject to\\nsome constraints. Here we consider just initial stopping failures. To define\\nthese networks, we begin with abstract network A1 and modify it systematically\\nto obtain the other two networks. To obtain A2, we simply lower the firing\\nthresholds of the neurons. To obtain D, we introduce failures of neurons and\\nedges, and incorporate redundancy in the neurons and edges in order to\\ncompensate for the failures. 
We also define corresponding inputs for the\\nnetworks, and corresponding executions of the networks. We prove two main theorems, one relating corresponding executions of A1 and D\\nand the other relating corresponding executions of A2 and D. Together, these\\ngive both firing and non-firing guarantees for the detailed network D. We also\\ngive a third theorem, relating the effects of D on an external reliable\\nactuator neuron to the effects of the abstract networks on the same actuator\\nneuron.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.02125\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.02125","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We show how brain networks, modeled as Spiking Neural Networks, can be viewed at different levels of abstraction. Lower levels include complications such as failures of neurons and edges. Higher levels are more abstract, making simplifying assumptions to avoid these complications. We show precise relationships between executions of networks at different levels, which enables us to understand the behavior of lower-level networks in terms of the behavior of higher-level networks. We express our results using two abstract networks, A1 and A2, one to express firing guarantees and the other to express non-firing guarantees, and one detailed network D. The abstract networks contain reliable neurons and edges, whereas the detailed network has neurons and edges that may fail, subject to some constraints. Here we consider just initial stopping failures. To define these networks, we begin with abstract network A1 and modify it systematically to obtain the other two networks. To obtain A2, we simply lower the firing thresholds of the neurons. To obtain D, we introduce failures of neurons and edges, and incorporate redundancy in the neurons and edges in order to compensate for the failures. We also define corresponding inputs for the networks, and corresponding executions of the networks. We prove two main theorems, one relating corresponding executions of A1 and D and the other relating corresponding executions of A2 and D. Together, these give both firing and non-firing guarantees for the detailed network D. We also give a third theorem, relating the effects of D on an external reliable actuator neuron to the effects of the abstract networks on the same actuator neuron.
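The abstract's constructions, lowering firing thresholds to obtain A2 and introducing failures plus redundancy to obtain D, can be made concrete with a toy model. The sketch below is not the paper's formal SNN model; it is a hypothetical, minimal discrete-time Boolean spiking network with per-neuron thresholds and initial stopping failures of neurons and edges (a failed component is dead from the start of the execution), written only to illustrate the kind of objects the theorems relate. All names and parameters here are assumptions for illustration.

```python
# Minimal illustrative sketch (assumed model, not the paper's formal definitions):
# a discrete-time Boolean spiking network. A neuron fires at time t+1 when the
# weighted sum of spikes arriving on its non-failed incoming edges at time t
# reaches its threshold. Initial stopping failures are fixed before execution:
# a failed neuron never fires, a failed edge never delivers spikes.

class Network:
    def __init__(self, weights, thresholds, failed_neurons=(), failed_edges=()):
        # weights: dict mapping (src, dst) -> edge weight
        # thresholds: dict mapping neuron -> firing threshold
        self.weights = weights
        self.thresholds = thresholds
        self.failed_neurons = set(failed_neurons)
        self.failed_edges = set(failed_edges)

    def step(self, firing):
        """Given the set of neurons firing at time t, return the set firing at t+1."""
        next_firing = set()
        for dst, theta in self.thresholds.items():
            if dst in self.failed_neurons:
                continue  # a failed neuron never fires
            potential = sum(
                w for (src, d), w in self.weights.items()
                if d == dst and src in firing and (src, d) not in self.failed_edges
            )
            if potential >= theta:
                next_firing.add(dst)
        return next_firing


if __name__ == "__main__":
    # Two external input neurons u, v feeding one output neuron w with unit weights.
    weights = {("u", "w"): 1.0, ("v", "w"): 1.0}
    a1 = Network(weights, {"w": 2.0})  # "A1"-style: w fires only if both inputs fire
    a2 = Network(weights, {"w": 1.0})  # "A2"-style: lowered threshold, one input suffices
    d = Network(weights, {"w": 2.0}, failed_edges={("v", "w")})  # "D"-style: a failed edge

    print(a1.step({"u"}))       # set()  -- threshold not reached
    print(a2.step({"u"}))       # {'w'}  -- lowered threshold gives a firing guarantee
    print(d.step({"u", "v"}))   # set()  -- the failed edge drops v's spike
```

In this toy setting, the paper's theme is visible in miniature: the reliable high-threshold network bounds when firing cannot happen, the lowered-threshold network bounds when firing must happen, and a failure-prone detailed network needs redundancy (extra neurons or edges, not shown here) for its executions to stay within those two envelopes.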