Abstraction in Neural Networks
Nancy Lynch
arXiv - CS - Neural and Evolutionary Computing, published 2024-08-04
DOI: arxiv-2408.02125 (https://doi.org/arxiv-2408.02125)
Citations: 0
Abstract
We show how brain networks, modeled as Spiking Neural Networks, can be viewed at different levels of abstraction. Lower levels include complications such as failures of neurons and edges. Higher levels are more abstract, making simplifying assumptions to avoid these complications. We show precise relationships between executions of networks at different levels, which enables us to understand the behavior of lower-level networks in terms of the behavior of higher-level networks.

We express our results using two abstract networks, A1 and A2, one to express firing guarantees and the other to express non-firing guarantees, and one detailed network D. The abstract networks contain reliable neurons and edges, whereas the detailed network has neurons and edges that may fail, subject to some constraints. Here we consider just initial stopping failures. To define these networks, we begin with abstract network A1 and modify it systematically to obtain the other two networks. To obtain A2, we simply lower the firing thresholds of the neurons. To obtain D, we introduce failures of neurons and edges, and incorporate redundancy in the neurons and edges in order to compensate for the failures. We also define corresponding inputs for the networks, and corresponding executions of the networks.

We prove two main theorems, one relating corresponding executions of A1 and D and the other relating corresponding executions of A2 and D. Together, these give both firing and non-firing guarantees for the detailed network D. We also give a third theorem, relating the effects of D on an external reliable actuator neuron to the effects of the abstract networks on the same actuator neuron.
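The abstract does not spell out the formal network model, but the three constructions it names (a reliable threshold neuron for A1, the same neuron with a lowered threshold for A2, and redundant failure-prone copies for D) can be illustrated with a minimal sketch. The discrete-time threshold rule, the function names `step` and `redundant_step`, the unit weights, and the quorum rule below are all illustrative assumptions, not the paper's actual construction.

```python
import random

def step(inputs_firing, weights, threshold, failed=False):
    """One discrete time step of a simple threshold (spiking) neuron:
    fire iff the summed weight of currently-firing inputs meets the
    threshold, unless the neuron suffered an initial stopping failure
    (in which case it never fires)."""
    if failed:
        return False
    total = sum(w for firing, w in zip(inputs_firing, weights) if firing)
    return total >= threshold

# Abstract network A1: a reliable neuron with threshold 2
# fires when two inputs fire, but not when only one does.
assert step([True, True, False], [1, 1, 1], threshold=2) is True
assert step([True, False, False], [1, 1, 1], threshold=2) is False

# Abstract network A2: the same neuron with a lowered threshold,
# so it fires on strictly more input patterns than A1.
assert step([True, False, False], [1, 1, 1], threshold=1) is True

def redundant_step(inputs_firing, weights, threshold, copies, fail_prob, quorum):
    """Detailed network D, sketched: redundant copies of the neuron,
    each of which may independently suffer an initial stopping failure;
    the group is considered to fire if at least `quorum` copies fire."""
    fired = 0
    for _ in range(copies):
        failed = random.random() < fail_prob
        if step(inputs_firing, weights, threshold, failed=failed):
            fired += 1
    return fired >= quorum
```

In this toy version, A1's firing guarantee survives in D whenever enough copies escape failure to reach the quorum, mirroring (very loosely) how the paper's redundancy compensates for initial stopping failures.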