Neural Computation: Latest Publications

Bounded Rational Decision Networks With Belief Propagation
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01719
Gerrit Schmid, Sebastian Gottwald, Daniel A. Braun
{"title":"Bounded Rational Decision Networks With Belief Propagation","authors":"Gerrit Schmid;Sebastian Gottwald;Daniel A. Braun","doi":"10.1162/neco_a_01719","DOIUrl":"10.1162/neco_a_01719","url":null,"abstract":"Complex information processing systems that are capable of a wide variety of tasks, such as the human brain, are composed of specialized units that collaborate and communicate with each other. An important property of such information processing networks is locality: there is no single global unit controlling the modules, but information is exchanged locally. Here, we consider a decision-theoretic approach to study networks of bounded rational decision makers that are allowed to specialize and communicate with each other. In contrast to previous work that has focused on feedforward communication between decision-making agents, we consider cyclical information processing paths allowing for back-and-forth communication. We adapt message-passing algorithms to suit this purpose, essentially allowing for local information flow between units and thus enabling circular dependency structures. We provide examples that show how repeated communication can increase performance given that each unit’s information processing capability is limited and that decision-making systems with too few or too many connections and feedback loops achieve suboptimal utility.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"76-127"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10810330","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
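The message-passing machinery the paper adapts is the sum-product algorithm. As a point of reference only, here is a minimal sum-product belief propagation sketch on a chain of binary variables, verified against brute-force enumeration; the chain structure, random factors, and all variable names are illustrative assumptions, not the paper's decision networks.

```python
# Minimal sum-product belief propagation on a chain of binary variables.
# Illustrative only: the chain, factors, and names below are not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                  # number of binary variables in the chain
unary = rng.random((n, 2))             # unary factors phi_i(x_i)
pair = rng.random((n - 1, 2, 2))       # pairwise factors psi_i(x_i, x_{i+1})

# Forward messages m_f[i] = message arriving at variable i from the left.
m_f = np.ones((n, 2))
for i in range(1, n):
    m_f[i] = (unary[i - 1] * m_f[i - 1]) @ pair[i - 1]
    m_f[i] /= m_f[i].sum()             # normalize for numerical stability

# Backward messages m_b[i] = message arriving at variable i from the right.
m_b = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    m_b[i] = pair[i] @ (unary[i + 1] * m_b[i + 1])
    m_b[i] /= m_b[i].sum()

# Marginal of each variable = product of its unary factor and incoming messages.
marginals = unary * m_f * m_b
marginals /= marginals.sum(axis=1, keepdims=True)

# Check against brute-force enumeration of all 2^n joint states.
joint = np.zeros([2] * n)
for idx in np.ndindex(*joint.shape):
    p = np.prod([unary[i, idx[i]] for i in range(n)])
    p *= np.prod([pair[i, idx[i], idx[i + 1]] for i in range(n - 1)])
    joint[idx] = p
joint /= joint.sum()
brute = np.stack([joint.sum(axis=tuple(j for j in range(n) if j != i)) for i in range(n)])
assert np.allclose(marginals, brute), "BP marginals must match enumeration on a tree"
print(marginals.round(3))
```

On a tree-structured graph this message passing is exact; the paper's contribution concerns what happens when such local exchanges run on cyclic networks of capacity-limited units.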
Computation With Sequences of Assemblies in a Model of the Brain
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01720
Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
{"title":"Computation With Sequences of Assemblies in a Model of the Brain","authors":"Max Dabagia;Christos H. Papadimitriou;Santosh S. Vempala","doi":"10.1162/neco_a_01720","DOIUrl":"10.1162/neco_a_01720","url":null,"abstract":"Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain’s learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has been subsequently shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to list a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain’s remarkable abilities to compute and learn, with sequences playing a vital role.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"193-233"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
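The sequence-memorization mechanism described in the abstract can be caricatured in a few lines: a k-winners-take-all cap selects an assembly at each step, and Hebbian potentiation from the previously active assembly onto the current one chains consecutive assemblies together. The toy simulation below is a sketch under assumed sizes, connection density, and learning rate, not the exact model of Papadimitriou et al. (2020); recall overlaps below 1.0 reflect the noisy cap.

```python
# Toy simulation of sequence chaining with neural assemblies and a k-cap.
# Illustrative assumptions throughout: sizes, density, and plasticity rule
# are stand-ins for the formal model analyzed in the paper.
import numpy as np

rng = np.random.default_rng(1)
N, k, beta, L = 1000, 50, 0.1, 5       # neurons, cap size, plasticity, seq length

stimuli = [rng.choice(N, k, replace=False) for _ in range(L)]   # stimulus targets
W = (rng.random((N, N)) < 0.05).astype(float)                    # sparse recurrent graph

def cap(drive):
    """k-winners-take-all: indices of the k most strongly driven neurons."""
    return np.argpartition(drive, -k)[-k:]

def step(prev_active, stim=None):
    drive = W[:, prev_active].sum(axis=1) if prev_active is not None else np.zeros(N)
    if stim is not None:
        drive[stim] += 5.0             # strong feedforward drive from the stimulus
    return cap(drive)

# Training: repeated presentation; Hebbian potentiation from the previously
# active assembly onto the currently active one chains them together.
for _ in range(10):
    prev = None
    for s in stimuli:
        active = step(prev, stim=s)
        if prev is not None:
            W[np.ix_(active, prev)] *= 1 + beta   # strengthen prev -> active edges
        prev = active

# Recall: present only the first stimulus, then let the dynamics run freely.
active = step(None, stim=stimuli[0])
for t in range(1, L):
    active = step(active)
    overlap = len(set(active) & set(stimuli[t])) / k
    print(f"step {t}: overlap with stored assembly {t} = {overlap:.2f}")
```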
Computing With Residue Numbers in High-Dimensional Representation
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01723
Christopher J. Kymn, Denis Kleyko, E. Paxon Frady, Connor Bybee, Pentti Kanerva, Friedrich T. Sommer, Bruno A. Olshausen
{"title":"Computing With Residue Numbers in High-Dimensional Representation","authors":"Christopher J. Kymn;Denis Kleyko;E. Paxon Frady;Connor Bybee;Pentti Kanerva;Friedrich T. Sommer;Bruno A. Olshausen","doi":"10.1162/neco_a_01723","DOIUrl":"10.1162/neco_a_01723","url":null,"abstract":"We introduce residue hyperdimensional computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using resources that scale only logarithmically with the range, a vast improvement over previous methods. It also exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"1-37"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
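The core encoding idea fits in a short sketch: each modulus gets a random phasor base vector whose powers represent residues, so component-wise multiplication of codes implements carry-free, fully parallel addition. The dimensionality and the brute-force similarity decoder below are illustrative choices for a self-contained demo; the paper's resonator-based factorization is far more efficient.

```python
# Minimal sketch of residue hyperdimensional computing. Assumed parameters:
# dimensionality, moduli, and the exhaustive similarity decoder are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
D = 1000                               # vector dimensionality
moduli = (3, 5, 7)                     # pairwise coprime -> range 3*5*7 = 105

def base(m):
    """Random base vector whose entries are m-th roots of unity."""
    return np.exp(2j * np.pi * rng.integers(0, m, D) / m)

bases = [base(m) for m in moduli]

def encode(x):
    """Bind the per-modulus codes: component-wise product across moduli."""
    return np.prod([b ** (x % m) for b, m in zip(bases, moduli)], axis=0)

def decode(v):
    """Recover x by maximizing inner-product similarity over the range."""
    sims = [np.real(np.vdot(encode(x), v)) / D for x in range(int(np.prod(moduli)))]
    return int(np.argmax(sims))

a, b = 17, 42
# Adding numbers = component-wise multiplication of their codes: each entry
# just adds phases modulo its modulus, with no carry propagation.
assert decode(encode(a) * encode(b)) == (a + b) % 105
print("decoded sum:", decode(encode(a) * encode(b)))
```

Because each component of `encode(a) * encode(b)` multiplies roots of unity, the product is exactly the code of (a + b) mod 105, so decoding is exact rather than approximate here.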
Selective Inference for Change Point Detection by Recurrent Neural Network
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01724
Tomohiro Shiraishi, Daiki Miwa, Vo Nguyen Le Duy, Ichiro Takeuchi
{"title":"Selective Inference for Change Point Detection by Recurrent Neural Network","authors":"Tomohiro Shiraishi;Daiki Miwa;Vo Nguyen Le Duy;Ichiro Takeuchi","doi":"10.1162/neco_a_01724","DOIUrl":"10.1162/neco_a_01724","url":null,"abstract":"In this study, we investigate the quantification of the statistical reliability of detected change points (CPs) in time series using a recurrent neural network (RNN). Thanks to its flexibility, RNN holds the potential to effectively identify CPs in time series characterized by complex dynamics. However, there is an increased risk of erroneously detecting random noise fluctuations as CPs. The primary goal of this study is to rigorously control the risk of false detections by providing theoretically valid p-values to the CPs detected by RNN. To achieve this, we introduce a novel method based on the framework of selective inference (SI). SI enables valid inferences by conditioning on the event of hypothesis selection, thus mitigating bias from generating and testing hypotheses on the same data. In this study, we apply an SI framework to RNN-based CP detection, where characterizing the complex process of RNN selecting CPs is our main technical challenge. We demonstrate the validity and effectiveness of the proposed method through artificial and real data experiments.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"160-192"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142666703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
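The essential point of selective inference is that the null distribution must be conditioned on the event that this particular change point was selected. The sketch below illustrates only that conditioning idea: a simple max-CUSUM-style detector stands in for the paper's RNN, and a Monte Carlo approximation stands in for the exact conditional p-values the paper derives.

```python
# Illustration of the selective-inference idea: test the mean shift at a
# detected change point *conditional on that point being selected*.
# Assumptions: a max-type two-sample detector replaces the paper's RNN, and
# the conditional null is approximated by Monte Carlo, not derived exactly.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, sigma = 60, 1.0

def detect(y):
    """Return (statistic, split point) maximizing the standardized mean shift."""
    best = (0.0, None)
    for t in range(5, n - 5):
        z = abs(y[t:].mean() - y[:t].mean()) / (sigma * np.sqrt(1 / t + 1 / (n - t)))
        best = max(best, (z, t))
    return best

y = rng.normal(0, sigma, n)            # pure noise: no true change point
stat, cp = detect(y)

# Naive p-value: treats cp as if it had been fixed before seeing the data,
# which is anti-conservative because cp was chosen to maximize the statistic.
p_naive = 2 * norm.sf(stat)

# Selective p-value (Monte Carlo): among null datasets on which the detector
# selects the *same* change point, how extreme is our observed statistic?
null_stats = [s for s, c in (detect(rng.normal(0, sigma, n)) for _ in range(5000))
              if c == cp]
p_selective = (1 + sum(s >= stat for s in null_stats)) / (1 + len(null_stats))

print(f"detected cp={cp}, naive p={p_naive:.3f}, selective p={p_selective:.3f}")
```

On pure-noise data the naive p-value tends to look spuriously significant, while the conditional one does not; the paper's technical contribution is computing this conditioning exactly for the far more complex selection event induced by an RNN.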
Relating Human Error–Based Learning to Modern Deep RL Algorithms
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01721
Michele Garibbo, Casimir J. H. Ludwig, Nathan F. Lepora, Laurence Aitchison
{"title":"Relating Human Error–Based Learning to Modern Deep RL Algorithms","authors":"Michele Garibbo;Casimir J. H. Ludwig;Nathan F. Lepora;Laurence Aitchison","doi":"10.1162/neco_a_01721","DOIUrl":"10.1162/neco_a_01721","url":null,"abstract":"In human error–based learning, the size and direction of a scalar error (i.e., the “directed error”) are used to update future actions. Modern deep reinforcement learning (RL) methods perform a similar operation but in terms of scalar rewards. Despite this similarity, the relationship between action updates of deep RL and human error–based learning has yet to be investigated. Here, we systematically compare the three major families of deep RL algorithms to human error–based learning. We show that all three deep RL approaches are qualitatively different from human error–based learning, as assessed by a mirror-reversal perturbation experiment. To bridge this gap, we developed an alternative deep RL algorithm inspired by human error–based learning, model-based deterministic policy gradients (MB-DPG). We showed that MB-DPG captures human error–based learning under mirror-reversal and rotational perturbations and that MB-DPG learns faster than canonical model-free algorithms on complex arm-based reaching tasks, while being more robust to (forward) model misspecification than model-based RL.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"128-159"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
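The idea behind MB-DPG is to push the gradient of a scalar reward through a differentiable forward model into a deterministic policy, rather than estimating that gradient from reward samples alone. Here is a minimal sketch of that gradient path on an assumed two-dimensional reaching toy; the task, dynamics, and network sizes are illustrative, not the paper's arm model.

```python
# Minimal sketch of the model-based deterministic policy gradient idea:
# backpropagate the task loss through a differentiable forward model into a
# deterministic policy. Toy 2D "reaching" dynamics are an assumption here.
import torch

torch.manual_seed(0)
policy = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(),
                             torch.nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def forward_model(state, action):
    """Differentiable toy dynamics: the hand moves by a clamped action."""
    return state + 0.1 * torch.tanh(action)

for step in range(2000):
    hand = torch.zeros(64, 2)                       # batch of start positions
    target = torch.rand(64, 2) * 2 - 1              # random targets in [-1, 1]^2
    for _ in range(10):                             # unroll a short reach
        obs = torch.cat([hand, target], dim=1)
        hand = forward_model(hand, policy(obs))     # gradient flows through model
    loss = ((hand - target) ** 2).sum(dim=1).mean() # negative reward
    opt.zero_grad(); loss.backward(); opt.step()

print("final mean squared distance:", loss.item())
```

Note the contrast with model-free deep RL: no reward sampling or value estimation is involved, and the error signal is directed (it carries both size and direction), which is the property the paper links to human error-based learning.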
Realizing Synthetic Active Inference Agents, Part II: Variational Message Updates
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-12 | DOI: 10.1162/neco_a_01713
Thijs van de Laar, Magnus Koudahl, Bert de Vries
{"title":"Realizing Synthetic Active Inference Agents, Part II: Variational Message Updates","authors":"Thijs van de Laar;Magnus Koudahl;Bert de Vries","doi":"10.1162/neco_a_01713","DOIUrl":"10.1162/neco_a_01713","url":null,"abstract":"The free energy principle (FEP) describes (biological) agents as minimizing a variational free energy (FE) with respect to a generative model of their environment. Active inference (AIF) is a corollary of the FEP that describes how agents explore and exploit their environment by minimizing an expected FE objective. In two related papers, we describe a scalable, epistemic approach to synthetic AIF by message passing on free-form Forney-style factor graphs (FFGs). A companion paper (part I of this article; Koudahl et al., 2023) introduces a constrained FFG (CFFG) notation that visually represents (generalized) FE objectives for AIF. This article (part II) derives message-passing algorithms that minimize (generalized) FE objectives on a CFFG by variational calculus. A comparison between simulated Bethe and generalized FE agents illustrates how the message-passing approach to synthetic AIF induces epistemic behavior on a T-maze navigation task. Extension of the T-maze simulation to learning goal statistics and a multiagent bargaining setting illustrate how this approach encourages reuse of nodes and updates in alternative settings. With a full message-passing account of synthetic AIF agents, it becomes possible to derive and reuse message updates across models and move closer to industrial applications of synthetic AIF.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 1","pages":"38-75"},"PeriodicalIF":2.7,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142309093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
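The objective such agents minimize decomposes into risk (divergence of predicted outcomes from preferred ones) plus ambiguity (expected observation uncertainty). The sketch below scores actions by this expected-free-energy-style objective in an assumed two-state, two-action world; it is a schematic of the objective only, not the paper's CFFG message-passing derivation.

```python
# Schematic of the expected-free-energy objective in active inference:
# G(action) = risk, KL(predicted obs || preferred obs), plus ambiguity,
# the expected observation entropy. The tiny world below is an assumption.
import numpy as np

A = np.array([[0.9, 0.2],              # p(observation | hidden state)
              [0.1, 0.8]])
B = {0: np.array([[1.0, 0.5],          # p(next state | state), action "stay"
                  [0.0, 0.5]]),
     1: np.array([[0.2, 0.0],          # p(next state | state), action "go"
                  [0.8, 1.0]])}
C = np.array([0.1, 0.9])               # preferred observation distribution
q = np.array([0.5, 0.5])               # current belief over hidden states

def efe(action, q):
    qs = B[action] @ q                 # predicted state distribution
    qo = A @ qs                        # predicted observation distribution
    risk = np.sum(qo * np.log(qo / C))                 # KL(q(o) || C)
    H = -np.sum(A * np.log(A), axis=0)                 # per-state obs entropy
    ambiguity = H @ qs                                  # expected ambiguity
    return risk + ambiguity

scores = {a: efe(a, q) for a in B}
print(scores, "-> choose action", min(scores, key=scores.get))
```

The ambiguity term is what makes the resulting behavior epistemic: actions that lead to informative (low-entropy) observations are preferred even before rewards enter the picture, which is what the paper's T-maze agents exhibit.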
A Fast Algorithm for the Real-Valued Combinatorial Pure Exploration of the Multi-Armed Bandit.
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-02 | DOI: 10.1162/neco_a_01728
Shintaro Nakamura, Masashi Sugiyama
{"title":"A Fast Algorithm for the Real-Valued Combinatorial Pure Exploration of the Multi-Armed Bandit.","authors":"Shintaro Nakamura, Masashi Sugiyama","doi":"10.1162/neco_a_01728","DOIUrl":"https://doi.org/10.1162/neco_a_01728","url":null,"abstract":"<p><p>We study the real-valued combinatorial pure exploration problem in the stochastic multi-armed bandit (R-CPE-MAB). We study the case where the size of the action set is polynomial with respect to the number of arms. In such a case, the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits. We introduce the combinatorial gap-based exploration (CombGapE) algorithm, whose sample complexity upper-bound-matches the lower bound up to a problem-dependent constant factor. We numerically show that the CombGapE algorithm outperforms existing methods significantly in both synthetic and real-world data sets.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-17"},"PeriodicalIF":2.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
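Gap-based exploration algorithms of this family repeatedly compare the empirically best action with its best challenger and sample an arm on which they disagree. The loop below is a rough schematic of that pattern under assumed Gaussian rewards, a crude uncertainty proxy, and a heuristic stopping rule; CombGapE's actual confidence widths, arm-selection rule, and guarantees differ.

```python
# Schematic gap-based pure exploration for a combinatorial bandit. Assumed:
# Gaussian rewards, a heuristic width and stopping rule, and a tiny
# "pick 2 of 5 arms" action set. Not the CombGapE algorithm itself.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
mu = np.array([0.9, 0.7, 0.5, 0.3, 0.1])        # unknown true arm means
K = len(mu)
actions = [np.isin(np.arange(K), c).astype(float)
           for c in combinations(range(K), 2)]  # all 2-subsets as 0/1 vectors

counts = np.ones(K)                              # one initial pull per arm
sums = rng.normal(mu, 1.0)

for t in range(20000):
    est = sums / counts
    vals = np.array([a @ est for a in actions])
    best = int(np.argmax(vals))
    others = [i for i in range(len(actions)) if i != best]
    challenger = others[int(np.argmax(vals[others]))]
    gap = vals[best] - vals[challenger]
    disagree = np.flatnonzero(actions[best] != actions[challenger])
    width = np.sqrt((1.0 / counts[disagree]).sum())   # crude uncertainty proxy
    if gap > 3.0 * width:                             # heuristic stopping rule
        break
    arm = disagree[int(np.argmin(counts[disagree]))]  # least-sampled disagreement
    counts[arm] += 1
    sums[arm] += rng.normal(mu[arm], 1.0)

print("pulls per arm:", counts.astype(int),
      "-> best action:", np.flatnonzero(actions[best]))
```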
On the Compressive Power of Autoencoders With Linear and ReLU Activation Functions.
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-02 | DOI: 10.1162/neco_a_01729
Liangjie Sun, Chenyao Wu, Wai-Ki Ching, Tatsuya Akutsu
{"title":"On the Compressive Power of Autoencoders With Linear and ReLU Activation Functions.","authors":"Liangjie Sun, Chenyao Wu, Wai-Ki Ching, Tatsuya Akutsu","doi":"10.1162/neco_a_01729","DOIUrl":"https://doi.org/10.1162/neco_a_01729","url":null,"abstract":"<p><p>In this letter, we mainly study the depth and width of autoencoders consisting of rectified linear unit (ReLU) activation functions. An autoencoder is a layered neural network consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector exactly (or approximately). In a previous study, Melkman et al. (2023) studied the depth and width of autoencoders using linear threshold activation functions with binary input and output vectors. We show that similar theoretical results hold if autoencoders using ReLU activation functions with real input and output vectors are used. Furthermore, we show that it is possible to compress input vectors to one-dimensional vectors using ReLU activation functions, although the size of compressed vectors is trivially Ω(log n) for autoencoders with linear threshold activation functions, where n is the number of input vectors. We also study the cases of linear activation functions. The results suggest that the compressive power of autoencoders using linear activation functions is considerably limited compared with those using ReLU activation functions.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-25"},"PeriodicalIF":2.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
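The one-dimensional-compression claim has a concrete flavor that is easy to check numerically: distinct binary vectors can be encoded as distinct scalars and recovered exactly by ReLU layers that peel off one bit at a time. The particular construction below is an illustrative one chosen for simplicity, not the letter's construction or its depth/width bounds.

```python
# Squeeze binary vectors through a 1-D bottleneck and decode them exactly
# with ReLU units that extract one bit per layer. Illustrative construction:
# the letter's autoencoder constructions and size bounds are more general.
import numpy as np

relu = lambda t: np.maximum(t, 0.0)
d = 8                                    # bits per input vector
X = np.array([[int(b) for b in np.binary_repr(i, d)] for i in range(2 ** d)])

# Encoder: one linear unit maps each binary vector to a distinct scalar code.
weights = 2.0 ** -(np.arange(d) + 1)
y = X @ weights

# Decoder: d ReLU "layers". Each layer recovers the leading bit with a sharp
# ReLU step (exact here because codes sit on a 2^-d grid), then subtracts it
# out and rescales, which is a linear operation.
alpha = 2.0 ** d
r, bits = y.copy(), []
for _ in range(d):
    bit = relu(alpha * (r - 0.5) + 1) - relu(alpha * (r - 0.5))   # hard step
    bits.append(bit)
    r = 2 * (r - 0.5 * bit)
decoded = np.stack(bits, axis=1)

assert np.array_equal(decoded, X)
print(f"recovered all {len(X)} vectors exactly through a 1-D bottleneck")
```

The trick relies on real-valued codes: a linear threshold unit outputs only one bit, which is why the letter's Ω(log n) lower bound applies there but not to ReLU decoders.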
Learning in Associative Networks Through Pavlovian Dynamics.
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-12-02 | DOI: 10.1162/neco_a_01730
Daniele Lotito, Miriam Aquaro, Chiara Marullo
{"title":"Learning in Associative Networks Through Pavlovian Dynamics.","authors":"Daniele Lotito, Miriam Aquaro, Chiara Marullo","doi":"10.1162/neco_a_01730","DOIUrl":"https://doi.org/10.1162/neco_a_01730","url":null,"abstract":"<p><p>Hebbian learning theory is rooted in Pavlov's classical conditioning While mathematical models of the former have been proposed and studied in the past decades, especially in spin glass theory, only recently has it been numerically shown that it is possible to write neural and synaptic dynamics that mirror Pavlov conditioning mechanisms and also give rise to synaptic weights that correspond to the Hebbian learning rule. In this letter we show that the same dynamics can be derived with equilibrium statistical mechanics tools and basic and motivated modeling assumptions. Then we show how to study the resulting system of coupled stochastic differential equations assuming the reasonable separation of neural and synaptic timescale. In particular, we analytically demonstrate that this synaptic evolution converges to the Hebbian learning rule in various settings and compute the variance of the stochastic process. Finally, drawing from evidence on pure memory reinforcement during sleep stages, we show how the proposed model can simulate neural networks that undergo sleep-associated memory consolidation processes, thereby proving the compatibility of Pavlovian learning with dreaming mechanisms.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-33"},"PeriodicalIF":2.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
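A minimal Euler simulation conveys the timescale-separation argument: fast neural variables track the presented pattern while slow synapses drift toward the activity outer product, so the weights converge to a Hebbian matrix. The equations, rates, and noise amplitude below are illustrative assumptions, not the letter's exact stochastic differential equations.

```python
# Euler simulation of timescale-separated neural/synaptic dynamics converging
# to a Hebbian weight matrix. All equations and constants are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, P = 20, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))      # stimuli to associate

dt, tau_s, tau_J = 0.01, 0.1, 20.0                   # fast vs slow timescales
s = np.zeros(N)                                       # neural state (fast)
J = np.zeros((N, N))                                  # synaptic weights (slow)

for step in range(100000):
    xi = patterns[(step // 2000) % P]                 # present patterns in turn
    s += dt / tau_s * (-s + xi)                       # fast relaxation to input
    J += dt / tau_J * (-J + np.outer(s, s))           # slow drift to outer product
    J += 0.05 * np.sqrt(dt) / tau_J * rng.standard_normal((N, N))  # weak noise

hebb = sum(np.outer(p, p) for p in patterns) / P      # Hebbian reference matrix
corr = np.corrcoef(J.ravel(), hebb.ravel())[0, 1]
print(f"correlation between learned J and Hebbian rule: {corr:.3f}")
```

Because tau_J greatly exceeds both tau_s and the presentation time, J effectively averages the outer product over stimuli, which is the separation-of-timescales limit the letter analyzes analytically.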
A Fast Algorithm for All-Pairs-Shortest-Paths Suitable for Neural Networks
IF 2.7 | CAS Tier 4 (Computer Science)
Neural Computation | Pub Date: 2024-11-19 | DOI: 10.1162/neco_a_01716
Zeyu Jing, Markus Meister
{"title":"A Fast Algorithm for All-Pairs-Shortest-Paths Suitable for Neural Networks","authors":"Zeyu Jing;Markus Meister","doi":"10.1162/neco_a_01716","DOIUrl":"10.1162/neco_a_01716","url":null,"abstract":"Given a directed graph of nodes and edges connecting them, a common problem is to find the shortest path between any two nodes. Here we show that the shortest path distances can be found by a simple matrix inversion: if the edges are given by the adjacency matrix Aij, then with a suitably small value of γ, the shortest path distances are Dij=ceil(logγ[(I-γA)-1]ij).We derive several graph-theoretic bounds on the value of γ and explore its useful range with numerics on different graph types. Even when the distance function is not globally accurate across the entire graph, it still works locally to instruct pursuit of the shortest path. In this mode, it also extends to weighted graphs with positive edge weights. For a wide range of dense graphs, this distance function is computationally faster than the best available alternative. Finally, we show that this method leads naturally to a neural network solution of the all-pairs-shortest-path problem.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"36 12","pages":"2710-2733"},"PeriodicalIF":2.7,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
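The abstract's formula can be checked directly in a few lines: invert I − γA, take ⌈log_γ(·)⌉ elementwise, and compare against Floyd-Warshall. The γ value and graph below are heuristic choices for a small random graph; the paper derives principled bounds on γ's valid range.

```python
# Numerical check of D_ij = ceil(log_gamma [(I - gamma*A)^(-1)]_ij) against
# Floyd-Warshall. gamma = 1e-3 is a heuristic choice for this small graph;
# the paper gives graph-theoretic bounds on its valid range.
import numpy as np

rng = np.random.default_rng(6)
n, gamma = 20, 1e-3
A = (rng.random((n, n)) < 0.3).astype(float)          # random directed graph
np.fill_diagonal(A, 0)

# The (i, j) entry of (I - gamma*A)^(-1) sums gamma^k over all walks of
# length k from i to j, so the shortest walk dominates for small gamma.
M = np.linalg.inv(np.eye(n) - gamma * A)
with np.errstate(divide="ignore", invalid="ignore"):  # unreachable pairs -> inf
    D = np.ceil(np.log(M) / np.log(gamma))

# Reference distances via Floyd-Warshall on the same graph.
ref = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(ref, 0.0)
for k in range(n):
    ref = np.minimum(ref, ref[:, [k]] + ref[[k], :])

reachable = np.isfinite(ref)
assert np.array_equal(D[reachable], ref[reachable])
print("matrix-inversion distances match Floyd-Warshall on all reachable pairs")
```

One numerical caveat worth noting: γ must be small enough that walk counts cannot overwhelm γ^d, but large enough that γ^diameter stays well above floating-point noise in the inverse, which is the trade-off the paper's bounds on γ formalize.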