arXiv - CS - Neural and Evolutionary Computing: Latest Articles

Connective Viewpoints of Signal-to-Noise Diffusion Models
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-08 DOI: arxiv-2408.04221
Khanh Doan, Long Tung Vuong, Tuan Nguyen, Anh Tuan Bui, Quyen Tran, Thanh-Toan Do, Dinh Phung, Trung Le
Diffusion models (DM) have become fundamental components of generative models, excelling across various domains such as image creation, audio generation, and complex data interpolation. Signal-to-Noise (S2N) diffusion models constitute a diverse family covering most state-of-the-art diffusion models. While there have been several attempts to study S2N diffusion models from various perspectives, a comprehensive study connecting the different viewpoints and exploring new perspectives is still missing. In this study, we offer a comprehensive perspective on noise schedulers, examining their role through the lens of the signal-to-noise ratio (SNR) and its connections to information theory. Building upon this framework, we develop a generalized backward equation to enhance the performance of the inference process.
Citations: 0
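As a rough, runnable illustration of the SNR lens described above, the sketch below computes the signal-to-noise ratio $\mathrm{SNR}(t)=\bar\alpha_t/(1-\bar\alpha_t)$ of a variance-preserving diffusion. The cosine schedule is a common choice assumed here purely for the demo; it is not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's code): the signal-to-noise view of a
# variance-preserving diffusion, x_t = sqrt(a_bar(t)) * x0 + sqrt(1 - a_bar(t)) * eps,
# for which SNR(t) = a_bar(t) / (1 - a_bar(t)). The cosine schedule below is
# one common choice, assumed for illustration only.

def cosine_alpha_bar(t, s=0.008):
    """Cumulative signal level a_bar(t) for t in [0, 1]."""
    return np.cos((t + s) / (1 + s) * np.pi / 2) ** 2

def snr(t):
    """Signal-to-noise ratio of the corrupted sample at time t."""
    a_bar = cosine_alpha_bar(t)
    return a_bar / (1.0 - a_bar)

t = np.linspace(0.0, 0.999, 5)
print(np.round(snr(t), 4))  # SNR decays monotonically as noise takes over
```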
Theoretical Advantage of Multiobjective Evolutionary Algorithms for Problems with Different Degrees of Conflict
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-08 DOI: arxiv-2408.04207
Weijie Zheng
The field of multiobjective evolutionary algorithms (MOEAs) often emphasizes its popularity for optimization problems with conflicting objectives. However, it is still theoretically unknown how MOEAs perform for different degrees of conflict, even for no conflicts, compared with typical approaches outside this field. As a first step toward tackling this question, we propose the OneMaxMin$_k$ benchmark class with degree of conflict $k\in[0..n]$, a generalized variant of COCZ and OneMinMax. Two typical non-MOEA approaches, scalarization (the weighted-sum approach) and the $\epsilon$-constraint approach, are considered. We prove that for any set of weights, the set of optima found by the scalarization approach cannot cover the full Pareto front. Although the set of optima of the constrained problems constructed via the $\epsilon$-constraint approach can cover the full Pareto front, the commonly used ways (via exterior or nonparameter penalty functions) to solve such constrained problems encounter difficulties. The nonparameter penalty function way cannot construct the set of optima whose function values are the Pareto front, and the exterior way helps (with expected runtime of $O(n\ln n)$ for the randomized local search algorithm to reach any Pareto front point) but only with careful settings of $\epsilon$ and $r$ ($r>1/(\epsilon+1-\lceil\epsilon\rceil)$). In contrast, the commonly analyzed MOEAs can efficiently solve OneMaxMin$_k$ without the above careful designs. We prove that (G)SEMO, MOEA/D, NSGA-II, and SMS-EMOA can cover the full Pareto front within $O(\max\{k,1\}\,n\ln n)$ expected function evaluations, which is the same asymptotic runtime as the exterior way in the $\epsilon$-constraint approach with careful settings. As a side result, our results also give a performance analysis of solving a constrained problem via the multiobjective way.
Citations: 0
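The precise definition of OneMaxMin$_k$ is given in the paper; as a hedged stand-in, the sketch below uses the classic OneMinMax special case to illustrate the scalarization result quoted above: every bit string is Pareto-optimal, yet any fixed unequal weighting is maximized only at an extreme point, and equal weights make the weighted sum constant, offering no guidance at all.

```python
import itertools

# Illustration (not from the paper): on OneMinMax, f(x) = (|x|_1, n - |x|_1),
# the whole search space is Pareto-optimal, but a fixed weighted sum
# w*f1 + (1-w)*f2 recovers at most one extreme of the front.

n = 4
def one_min_max(x):
    ones = sum(x)
    return ones, n - ones

for w in (0.3, 0.5, 0.7):
    best = max(itertools.product((0, 1), repeat=n),
               key=lambda x: w * one_min_max(x)[0] + (1 - w) * one_min_max(x)[1])
    # w < 0.5 favors all-zeros, w > 0.5 favors all-ones; at w = 0.5 every
    # solution ties, so the "optimum" reported is an arbitrary tie-break
    print(w, best, one_min_max(best))
```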
ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-08 DOI: arxiv-2408.04539
Zherui Zhang, Fan Yang, Ran Cheng, Yuxin Ma
Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by a preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.
Citations: 0
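ParetoTracker itself is a visual analytics system, so no single snippet reproduces it; as a hypothetical stand-in for the "overall trends in performance metrics" level of its analysis scheme, the sketch below computes a 2D hypervolume per generation, the kind of per-generation quantity such a tool plots across consecutive generations. The `hypervolume_2d` helper and the toy populations are my own illustration, not the framework's API.

```python
# Sketch (hypothetical, not ParetoTracker code): per-generation hypervolume
# of a nondominated set, for two minimization objectives and a reference point.

def hypervolume_2d(points, ref):
    """Area dominated by `points` (minimization) and bounded above by `ref`."""
    front = sorted(points)              # ascending in f1, so f2 should descend
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        if f2 < prev_f2:                # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

generations = [
    [(3.0, 3.0)],                       # generation 0
    [(2.0, 3.0), (3.0, 1.5)],           # generation 1
    [(1.0, 3.0), (2.0, 1.0)],           # generation 2
]
for g, pop in enumerate(generations):
    print(g, hypervolume_2d(pop, ref=(4.0, 4.0)))   # rising trend = convergence
```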
Solving QUBO on the Loihi 2 Neuromorphic Processor
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-06 DOI: arxiv-2408.03076
Alessandro Pierro, Philipp Stratmann, Gabriel Andres Fonseca Guerra, Sumedh Risbud, Timothy Shea, Ashish Rao Mangalore, Andreas Wild
In this article, we describe an algorithm for solving Quadratic Unconstrained Binary Optimization (QUBO) problems on the Intel Loihi 2 neuromorphic processor. The solver is based on a hardware-aware fine-grained parallel simulated annealing algorithm developed for Intel's neuromorphic research chip Loihi 2. Preliminary results show that our approach can generate feasible solutions in as little as 1 ms and with up to 37x better energy efficiency than two baseline solvers running on a CPU. These advantages could be especially relevant for size-, weight-, and power-constrained edge computing applications.
Citations: 0
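For orientation, here is a plain-CPU simulated annealing sketch for QUBO, minimizing $x^\top Q x$ over $x\in\{0,1\}^n$. It is a generic single-flip baseline for illustration, not the hardware-aware fine-grained parallel algorithm that runs on Loihi 2.

```python
import math
import random

# Generic single-flip simulated annealing for QUBO: min x^T Q x, x in {0,1}^n.

def qubo_energy(Q, x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def simulated_annealing(Q, steps=5000, t0=2.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    e = qubo_energy(Q, x)
    for s in range(steps):
        t = t0 * (t_end / t0) ** (s / steps)   # geometric cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                              # propose a single bit flip
        e_new = qubo_energy(Q, x)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                          # accept the move
        else:
            x[i] ^= 1                          # reject: undo the flip
    return x, e

Q = [[-1, 2], [2, -1]]   # toy instance; optima are (1,0) and (0,1), energy -1
print(simulated_annealing(Q))
```

In practice the energy change of a single flip is computed incrementally in O(n) rather than re-evaluating the full quadratic form; the recomputation above is kept only for clarity, and it is exactly this fine-grained, per-variable structure that parallel neuromorphic implementations exploit.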
Synaptic Modulation using Interspike Intervals Increases Energy Efficiency of Spiking Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-06 DOI: arxiv-2408.02961
Dylan Adams, Magda Zajaczkowska, Ashiq Anjum, Andrea Soltoggio, Shirin Dora
Despite basic differences between Spiking Neural Networks (SNN) and Artificial Neural Networks (ANN), most research on SNNs involves adapting ANN-based methods for SNNs. Pruning (dropping connections) and quantization (reducing precision) are often used to improve the energy efficiency of SNNs. These methods are very effective for ANNs, whose energy needs are determined by signals transmitted on synapses. However, the event-driven paradigm in SNNs implies that energy is consumed by spikes. In this paper, we propose a new synapse model whose weights are modulated by Interspike Intervals (ISI), i.e., the time difference between two spikes. SNNs composed of this synapse model, termed ISI Modulated SNNs (IMSNN), can use gradient descent to estimate how the ISI of a neuron changes after updating its synaptic parameters. A higher ISI implies fewer spikes and vice versa. The learning algorithm for IMSNNs exploits this information to selectively propagate gradients such that learning is achieved by increasing the ISIs, resulting in a network that generates fewer spikes. The performance of IMSNNs with dense and convolutional layers has been evaluated in terms of classification accuracy and the number of spikes using the MNIST and FashionMNIST datasets. The performance comparison with conventional SNNs shows that IMSNNs exhibit up to 90% reduction in the number of spikes while maintaining similar classification accuracy.
Citations: 0
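A minimal sketch of the synapse idea follows. The exponential form of `modulation` is an assumed placeholder standing in for the paper's actual ISI-dependent weight rule, which the abstract does not specify.

```python
import math

# Sketch of the idea only: an "ISI-modulated" synapse whose effective weight
# depends on the presynaptic interspike interval. The exponential decay below
# is an ASSUMED placeholder, not the modulation function from the paper.

def modulation(isi, tau=20.0):
    return math.exp(-isi / tau)      # longer ISI -> weaker drive (assumption)

def effective_weight(w, last_spike_t, spike_t):
    isi = spike_t - last_spike_t     # time difference between two spikes
    return w * modulation(isi)

print(effective_weight(0.5, last_spike_t=10.0, spike_t=15.0))   # short ISI
print(effective_weight(0.5, last_spike_t=10.0, spike_t=60.0))   # long ISI
```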
PENDRAM: Enabling High-Performance and Energy-Efficient Processing of Deep Neural Networks through a Generalized DRAM Data Mapping Policy
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-05 DOI: arxiv-2408.02412
Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Convolutional Neural Networks (CNNs), a prominent type of Deep Neural Networks (DNNs), have emerged as a state-of-the-art solution for machine learning tasks. To improve the performance and energy efficiency of CNN inference, the employment of specialized hardware accelerators is prevalent. However, CNN accelerators still face performance- and energy-efficiency challenges due to high off-chip memory (DRAM) access latency and energy, which are especially crucial for latency- and energy-constrained embedded applications. Moreover, different DRAM architectures have different profiles of access latency and energy, making it challenging to optimize them for high-performance and energy-efficient CNN accelerators. To address this, we present PENDRAM, a novel design space exploration methodology that enables high-performance and energy-efficient CNN acceleration through a generalized DRAM data mapping policy. Specifically, it explores the impact of different DRAM data mapping policies and DRAM architectures, across different CNN partitioning and scheduling schemes, on DRAM access latency and energy, and then identifies the Pareto-optimal design choices. The experimental results show that our DRAM data mapping policy improves the energy-delay product of DRAM accesses in the CNN accelerator over other mapping policies by up to 96%. In this manner, our PENDRAM methodology offers high-performance and energy-efficient CNN acceleration under any given DRAM architecture for diverse embedded AI applications.
Citations: 0
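To make the optimization target concrete, the sketch below scores hypothetical mapping policies by the energy-delay product (EDP) of their DRAM access mix. All constants and hit rates are invented for illustration; the paper's actual models and numbers differ.

```python
# Sketch with ILLUSTRATIVE numbers (not from the paper): comparing DRAM data
# mapping policies by the energy-delay product of their access mix. Row-buffer
# hits are cheaper and faster than misses, so a mapping that raises the hit
# rate lowers EDP.

E_HIT, E_MISS = 1.0, 4.0      # energy per access (arbitrary units, assumed)
T_HIT, T_MISS = 1.0, 3.0      # latency per access (arbitrary units, assumed)

def edp(n_accesses, hit_rate):
    energy = n_accesses * (hit_rate * E_HIT + (1 - hit_rate) * E_MISS)
    delay = n_accesses * (hit_rate * T_HIT + (1 - hit_rate) * T_MISS)
    return energy * delay

policies = {"row-major": 0.55, "tile-aware": 0.90}   # hypothetical hit rates
for name, h in policies.items():
    print(name, edp(1_000_000, h))
```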
MARCO: A Memory-Augmented Reinforcement Framework for Combinatorial Optimization
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-05 DOI: arxiv-2408.02207
Andoni I. Garmendia, Quentin Cappart, Josu Ceberio, Alexander Mendiburu
Neural Combinatorial Optimization (NCO) is an emerging domain where deep learning techniques are employed to address combinatorial optimization problems as a standalone solver. Despite their potential, existing NCO methods often suffer from inefficient search space exploration, frequently leading to local optima entrapment or redundant exploration of previously visited states. This paper introduces a versatile framework, referred to as Memory-Augmented Reinforcement for Combinatorial Optimization (MARCO), that can be used to enhance both constructive and improvement methods in NCO through an innovative memory module. MARCO stores data collected throughout the optimization trajectory and retrieves contextually relevant information at each state. This way, the search is guided by two competing criteria: making the best decision in terms of the quality of the solution and avoiding revisiting already explored solutions. This approach promotes a more efficient use of the available optimization budget. Moreover, thanks to the parallel nature of NCO models, several search threads can run simultaneously, all sharing the same memory module, enabling an efficient collaborative exploration. Empirical evaluations, carried out on the maximum cut, maximum independent set, and travelling salesman problems, reveal that the memory module effectively increases exploration, enabling the model to discover diverse, higher-quality solutions. MARCO achieves good performance at a low computational cost, establishing a promising new direction in the field of NCO.
Citations: 0
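A conceptual sketch of the memory idea described above follows; the `VisitMemory` class and its fixed score penalty are assumed stand-ins for how the learned policy actually consumes the retrieved information.

```python
# Conceptual sketch of the abstract's central idea: a memory of visited
# solutions, shareable across parallel search threads, used to discount moves
# that lead back to already-explored states. The penalty rule is an ASSUMPTION.

class VisitMemory:
    def __init__(self, penalty=0.5):
        self.seen = set()
        self.penalty = penalty

    def record(self, solution):
        self.seen.add(tuple(solution))

    def score(self, solution, model_score):
        # trade off solution quality against revisiting explored states
        if tuple(solution) in self.seen:
            return model_score - self.penalty
        return model_score

memory = VisitMemory()
memory.record([0, 1, 1])
print(memory.score([0, 1, 1], model_score=1.2))   # discounted: already seen
print(memory.score([1, 1, 0], model_score=1.0))   # unseen: unchanged
```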
An investigation on the use of Large Language Models for hyperparameter tuning in Evolutionary Algorithms
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-05 DOI: arxiv-2408.02451
Leonardo Lucio Custode, Fabio Caraffini, Anil Yaman, Giovanni Iacca
Hyperparameter optimization is a crucial problem in Evolutionary Computation. In fact, the values of the hyperparameters directly impact the trajectory taken by the optimization process, and their choice requires extensive reasoning by human operators. Although a variety of self-adaptive Evolutionary Algorithms have been proposed in the literature, no definitive solution has been found. In this work, we perform a preliminary investigation to automate the reasoning process that leads to the choice of hyperparameter values. We employ two open-source Large Language Models (LLMs), namely Llama2-70b and Mixtral, to analyze the optimization logs online and provide novel real-time hyperparameter recommendations. We study our approach in the context of step-size adaptation for (1+1)-ES. The results suggest that LLMs can be an effective method for optimizing hyperparameters in Evolution Strategies, encouraging further research in this direction.
Citations: 0
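The setting is easy to sketch: a (1+1)-ES that periodically revises its step size from the recent optimization log. The `llm_recommend` hook below is a hypothetical placeholder for the Llama2-70b/Mixtral call; to keep the demo runnable it falls back to the classic 1/5-success rule rather than querying a model.

```python
import random

# Sketch of the studied setting, assuming the hook below; in the paper the
# recent log would be sent to an LLM that returns a new step size.

def sphere(x):
    return sum(v * v for v in x)

def llm_recommend(log, sigma):
    # PLACEHOLDER for the LLM call: approximate it with the 1/5-success rule.
    success_rate = sum(log) / len(log)
    return sigma * (1.5 if success_rate > 0.2 else 0.7)

def one_plus_one_es(dim=5, iters=300, sigma=1.0, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx, log = sphere(x), []
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]   # mutate parent
        fy = sphere(y)
        log.append(fy < fx)                              # record success
        if fy < fx:
            x, fx = y, fy                                # elitist replacement
        if t % 20 == 0:                                  # adapt every 20 evals
            sigma, log = llm_recommend(log, sigma), []
    return fx

print(one_plus_one_es())
```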
A Landscape-Aware Differential Evolution for Multimodal Optimization Problems
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-05 DOI: arxiv-2408.02340
Guo-Yun Lin, Zong-Gan Chen, Yuncheng Jiang, Zhi-Hui Zhan, Jun Zhang
Simultaneously locating multiple global peaks and achieving sufficient accuracy on the found peaks are two key challenges in solving multimodal optimization problems (MMOPs). In this paper, a landscape-aware differential evolution (LADE) algorithm is proposed for MMOPs, which utilizes landscape knowledge to maintain sufficient diversity and provide efficient search guidance. In detail, the landscape knowledge is utilized in three aspects. First, a landscape-aware peak exploration helps each individual evolve adaptively to locate a peak, and estimates the regions of the found peaks from the search history so that individuals avoid relocating an already-found peak. Second, a landscape-aware peak distinction determines whether an individual has located a new global peak, a new local peak, or an already-found peak. Accuracy refinement can thus be conducted only on the global peaks, enhancing search efficiency. Third, a landscape-aware reinitialization adaptively specifies the initial position of an individual according to the distribution of the found peaks, which helps explore more peaks. Experiments are conducted on 20 widely used benchmark MMOPs. Experimental results show that LADE obtains generally better or competitive performance compared with seven recently proposed, well-performing algorithms and four winner algorithms of the IEEE CEC competitions for multimodal optimization.
Citations: 0
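One of the three ingredients, the landscape-aware reinitialization, can be sketched as rejection sampling away from the recorded peak regions. The exclusion radius, the Chebyshev distance test, and the uniform resampling below are assumptions for illustration, not LADE's actual adaptive rule.

```python
import random

# Simplified sketch of reinitialization away from already-found peaks.
# Radius, distance metric, and uniform resampling are ASSUMED for the demo.

def reinitialize(found_peaks, bounds, radius=0.5, rng=random.Random(0)):
    lo, hi = bounds
    dim = len(found_peaks[0])
    while True:
        candidate = [rng.uniform(lo, hi) for _ in range(dim)]
        # accept only if outside the exclusion box around every found peak
        if all(max(abs(c - p) for c, p in zip(candidate, peak)) > radius
               for peak in found_peaks):
            return candidate

peaks = [[0.0, 0.0], [3.0, 3.0]]       # peaks located so far
print(reinitialize(peaks, bounds=(-5.0, 5.0)))
```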
Abstraction in Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-04 DOI: arxiv-2408.02125
Nancy Lynch
We show how brain networks, modeled as Spiking Neural Networks, can be viewed at different levels of abstraction. Lower levels include complications such as failures of neurons and edges. Higher levels are more abstract, making simplifying assumptions to avoid these complications. We show precise relationships between executions of networks at different levels, which enables us to understand the behavior of lower-level networks in terms of the behavior of higher-level networks.

We express our results using two abstract networks, A1 and A2, one to express firing guarantees and the other to express non-firing guarantees, and one detailed network D. The abstract networks contain reliable neurons and edges, whereas the detailed network has neurons and edges that may fail, subject to some constraints. Here we consider just initial stopping failures. To define these networks, we begin with abstract network A1 and modify it systematically to obtain the other two networks. To obtain A2, we simply lower the firing thresholds of the neurons. To obtain D, we introduce failures of neurons and edges, and incorporate redundancy in the neurons and edges in order to compensate for the failures. We also define corresponding inputs for the networks, and corresponding executions of the networks.

We prove two main theorems, one relating corresponding executions of A1 and D and the other relating corresponding executions of A2 and D. Together, these give both firing and non-firing guarantees for the detailed network D. We also give a third theorem, relating the effects of D on an external reliable actuator neuron to the effects of the abstract networks on the same actuator neuron.
Citations: 0
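A toy rendering of the contrast described above, with all parameters assumed: an abstract reliable threshold neuron next to a detailed variant built from redundant copies, each of which may suffer an initial stopping failure and never fire.

```python
import random

# Toy sketch (parameters ASSUMED, not the paper's construction): a reliable
# abstract threshold neuron vs. a detailed group of r redundant copies, each
# subject to an initial stopping failure with probability fail_p.

def abstract_fires(inputs, threshold=2):
    return sum(inputs) >= threshold

def detailed_fires(inputs, threshold=2, redundancy=3, fail_p=0.2,
                   rng=random.Random(0)):
    surviving = [rng.random() >= fail_p for _ in range(redundancy)]
    # the redundant group stands in for one abstract neuron: it fires if at
    # least one surviving copy sees enough input to cross the threshold
    return any(surviving) and sum(inputs) >= threshold

inputs = [1, 1, 0]
print(abstract_fires(inputs), detailed_fires(inputs))
```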