{"title":"Neural Code Translation With LIF Neuron Microcircuits","authors":"Ville Karlsson;Joni Kämäräinen","doi":"10.1162/neco_a_01754","DOIUrl":"10.1162/neco_a_01754","url":null,"abstract":"Spiking neural networks (SNNs) provide an energy-efficient alternative to traditional artificial neural networks, leveraging diverse neural encoding schemes such as rate, time-to-first-spike (TTFS), and population-based binary codes. Each encoding method offers distinct advantages: TTFS enables rapid and precise transmission with minimal energy use, rate encoding provides robust signal representation, and binary population encoding aligns well with digital hardware implementations. This letter introduces a set of neural microcircuits based on leaky integrate-and-fire neurons that enable translation between these encoding schemes. We propose two applications showcasing the utility of these microcircuits. First, we demonstrate a number comparison operation that significantly reduces spike transmission by switching from rate to TTFS encoding. Second, we present a high-bandwidth neural transmitter capable of encoding and transmitting binary population-encoded data through a single axon and reconstructing it at the target site. Additionally, we conduct a detailed analysis of these microcircuits, providing quantitative metrics to assess their efficiency in terms of neuron count, synaptic complexity, spike overhead, and runtime. Our findings highlight the potential of LIF neuron microcircuits in computational neuroscience and neuromorphic computing, offering a pathway to more interpretable and efficient SNN designs.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 6","pages":"1124-1153"},"PeriodicalIF":2.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144046411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamics and Bifurcation Structure of a Mean-Field Model of Adaptive Exponential Integrate-and-Fire Networks","authors":"Lionel Kusch;Damien Depannemaecker;Alain Destexhe;Viktor Jirsa","doi":"10.1162/neco_a_01758","DOIUrl":"10.1162/neco_a_01758","url":null,"abstract":"The study of brain activity spans diverse scales and levels of description and requires the development of computational models alongside experimental investigations to explore integrations across scales. The high dimensionality of spiking networks presents challenges for understanding their dynamics. To tackle this, a mean-field formulation offers a potential approach for dimensionality reduction while retaining essential elements. Here, we focus on a previously developed mean-field model of adaptive exponential integrate and fire (AdEx) networks used in various research work. We observe qualitative similarities in the bifurcation structure but quantitative differences in mean firing rates between the mean-field model and AdEx spiking network simulations. Even if the mean-field model does not accurately predict phase shift during transients and oscillatory input, it generally captures the qualitative dynamics of the spiking network’s response to both constant and varying inputs. Finally, we offer an overview of the dynamical properties of the AdExMF to assist future users in interpreting their results of simulations.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 6","pages":"1102-1123"},"PeriodicalIF":2.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144045166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory States From Almost Nothing: Representing and Computing in a Nonassociative Algebra","authors":"Stefan Reimann","doi":"10.1162/neco_a_01755","DOIUrl":"10.1162/neco_a_01755","url":null,"abstract":"This letter presents a nonassociative algebraic framework for the representation and computation of information items in high-dimensional space. This framework is consistent with the principles of spatial computing and with the empirical findings in cognitive science about memory. Computations are performed through a process of multiplication-like binding and nonassociative interference-like bundling. Models that rely on associative bundling typically lose order information, which necessitates the use of auxiliary order structures, such as position markers, to represent sequential information that is important for cognitive tasks. In contrast, the nonassociative bundling proposed allows the construction of sparse representations of arbitrarily long sequences that maintain their temporal structure across arbitrary lengths. In this operation, noise is a constituent element of the representation of order information rather than a means of obscuring it. The nonassociative nature of the proposed framework results in the representation of a single sequence by two distinct states. The L-state, generated through left-associative bundling, continuously updates and emphasizes a recency effect, while the R-state, formed through right-associative bundling, encodes finite sequences or chunks, capturing a primacy effect. The construction of these states may be associated with activity in the prefrontal cortex in relation to short-term memory and hippocampal encoding in long-term memory, respectively. The accuracy of retrieval is contingent on a decision-making process that is based on the mutual information between the memory states and the cue. The model is able to replicate the serial position curve, which reflects the empirical recency and primacy effects observed in cognitive experiments.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 6","pages":"1154-1170"},"PeriodicalIF":2.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-Rank, High-Order Tensor Completion via t- Product-Induced Tucker (tTucker) Decomposition","authors":"Yaodong Li;Jun Tan;Peilin Yang;Guoxu Zhou;Qibin Zhao","doi":"10.1162/neco_a_01756","DOIUrl":"10.1162/neco_a_01756","url":null,"abstract":"Recently, tensor singular value decomposition (t-SVD)–based methods were proposed to solve the low-rank tensor completion (LRTC) problem, which has achieved unprecedented success on image and video inpainting tasks. The t-SVD is limited to process third-order tensors. When faced with higher-order tensors, it reshapes them into third-order tensors, leading to the destruction of interdimensional correlations. To address this limitation, this letter introduces a tproductinduced Tucker decomposition (tTucker) model that replaces the mode product in Tucker decomposition with t-product, which jointly extends the ideas of t-SVD and high-order SVD. This letter defines the rank of the tTucker decomposition and presents an LRTC model that minimizes the induced Schatten-p norm. An efficient alternating direction multiplier method (ADMM) algorithm is developed to optimize the proposed LRTC model, and its effectiveness is demonstrated through experiments conducted on both synthetic and real data sets, showcasing excellent performance.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 6","pages":"1171-1192"},"PeriodicalIF":2.7,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144029302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replay as a Basis for Backpropagation Through Time in the Brain","authors":"Huzi Cheng;Joshua W. Brown","doi":"10.1162/neco_a_01735","DOIUrl":"10.1162/neco_a_01735","url":null,"abstract":"How episodic memories are formed in the brain is a continuing puzzle for the neuroscience community. The brain areas that are critical for episodic learning (e.g., the hippocampus) are characterized by recurrent connectivity and generate frequent offline replay events. The function of the replay events is a subject of active debate. Recurrent connectivity, computational simulations show, enables sequence learning when combined with a suitable learning algorithm such as backpropagation through time (BPTT). BPTT, however, is not biologically plausible. We describe here, for the first time, a biologically plausible variant of BPTT in a reversible recurrent neural network, R2N2, that critically leverages offline replay to support episodic learning. The model uses forward and backward offline replay to transfer information between two recurrent neural networks, a cache and a consolidator, that perform rapid one-shot learning and statistical learning, respectively. Unlike replay in standard BPTT, this architecture requires no artificial external memory store. This approach outperforms existing solutions like random feedback local online learning and reservoir network. It also accounts for the functional significance of hippocampal replay events. We demonstrate the R2N2 network properties using benchmark tests from computer science and simulate the rodent delayed alternation T-maze task.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"403-436"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gradual Domain Adaptation via Normalizing Flows","authors":"Shogo Sagawa;Hideitsu Hino","doi":"10.1162/neco_a_01734","DOIUrl":"10.1162/neco_a_01734","url":null,"abstract":"Standard domain adaptation methods do not work well when a large gap exists between the source and target domains. Gradual domain adaptation is one of the approaches used to address the problem. It involves leveraging the intermediate domain, which gradually shifts from the source domain to the target domain. In previous work, it is assumed that the number of intermediate domains is large and the distance between adjacent domains is small; hence, the gradual domain adaptation algorithm, involving self-training with unlabeled data sets, is applicable. In practice, however, gradual self-training will fail because the number of intermediate domains is limited and the distance between adjacent domains is large. We propose the use of normalizing flows to deal with this problem while maintaining the framework of unsupervised domain adaptation. The proposed method learns a transformation from the distribution of the target domains to the gaussian mixture distribution via the source domain. We evaluate our proposed method by experiments using real-world data sets and confirm that it mitigates the problem we have explained and improves the classification performance.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"522-568"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncovering Dynamical Equations of Stochastic Decision Models Using Data-Driven SINDy Algorithm","authors":"Brendan Lenfesty;Saugat Bhattacharyya;KongFatt Wong-Lin","doi":"10.1162/neco_a_01736","DOIUrl":"10.1162/neco_a_01736","url":null,"abstract":"Decision formation in perceptual decision making involves sensory evidence accumulation instantiated by the temporal integration of an internal decision variable toward some decision criterion or threshold, as described by sequential sampling theoretical models. The decision variable can be represented in the form of experimentally observable neural activities. Hence, elucidating the appropriate theoretical model becomes crucial to understanding the mechanisms underlying perceptual decision formation. Existing computational methods are limited to either fitting of choice behavioral data or linear model estimation from neural activity data. In this work, we made use of sparse identification of nonlinear dynamics (SINDy), a data-driven approach, to elucidate the deterministic linear and nonlinear components of often-used stochastic decision models within reaction time task paradigms. Based on the simulated decision variable activities of the models and assuming the noise coefficient term is known beforehand, SINDy, enhanced with approaches using multiple trials, could readily estimate the deterministic terms in the dynamical equations, choice accuracy, and decision time of the models across a range of signal-to-noise ratio values. In particular, SINDy performed the best using the more memory-intensive multi-trial approach while trial-averaging of parameters performed more moderately. The single-trial approach, although expectedly not performing as well, may be useful for real-time modeling. Taken together, our work offers alternative approaches for SINDy to uncover the dynamics in perceptual decision making and, more generally, for first-passage time problems.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"569-587"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908352","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward a Free-Response Paradigm of Decision Making in Spiking Neural Networks","authors":"Zhichao Zhu;Yang Qi;Wenlian Lu;Zhigang Wang;Lu Cao;Jianfeng Feng","doi":"10.1162/neco_a_01733","DOIUrl":"10.1162/neco_a_01733","url":null,"abstract":"Spiking neural networks (SNNs) have attracted significant interest in the development of brain-inspired computing systems due to their energy efficiency and similarities to biological information processing. In contrast to continuous-valued artificial neural networks, which produce results in a single step, SNNs require multiple steps during inference to achieve a desired accuracy level, resulting in a burden in real-time response and energy efficiency. Inspired by the tradeoff between speed and accuracy in human and animal decision-making processes, which exhibit correlations among reaction times, task complexity, and decision confidence, an inquiry emerges regarding how an SNN model can benefit by implementing these attributes. Here, we introduce a theory of decision making in SNNs by untangling the interplay between signal and noise. Under this theory, we introduce a new learning objective that trains an SNN not only to make the correct decisions but also to shape its confidence. Numerical experiments demonstrate that SNNs trained in this way exhibit improved confidence expression, reduced trial-to-trial variability, and shorter latency to reach the desired accuracy. We then introduce a stopping policy that can stop inference in a way that further enhances the time efficiency of SNNs. The stopping time can serve as an indicator to whether a decision is correct, akin to the reaction time in animal behavior experiments. By integrating stochasticity into decision making, this study opens up new possibilities to explore the capabilities of SNNs and advance SNNs and their applications in complex decision-making scenarios where model performance is limited.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"481-521"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10908351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Recall in Sparse Associative Memories That Use Neurogenesis","authors":"Katy Warr;Jonathon Hare;David Thomas","doi":"10.1162/neco_a_01732","DOIUrl":"10.1162/neco_a_01732","url":null,"abstract":"The creation of future low-power neuromorphic solutions requires specialist spiking neural network (SNN) algorithms that are optimized for neuromorphic settings. One such algorithmic challenge is the ability to recall learned patterns from their noisy variants. Solutions to this problem may be required to memorize vast numbers of patterns based on limited training data and subsequently recall the patterns in the presence of noise. To solve this problem, previous work has explored sparse associative memory (SAM)—associative memory neural models that exploit the principle of sparse neural coding observed in the brain. Research into a subcategory of SAM has been inspired by the biological process of adult neurogenesis, whereby new neurons are generated to facilitate adaptive and effective lifelong learning. Although these neurogenesis models have been demonstrated in previous research, they have limitations in terms of recall memory capacity and robustness to noise. In this article, we provide a unifying framework for characterizing a type of SAM network that has been pretrained using a learning strategy that incorporated a simple neurogenesis model. Using this characterization, we formally define network topology and threshold optimization methods to empirically demonstrate greater than 104 times improvement in memory capacity compared to previous work. We show that these optimizations can facilitate the development of networks that have reduced interneuron connectivity while maintaining high recall efficacy. This paves the way for ongoing research into fast, effective, low-power realizations of associative memory on neuromorphic platforms.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 3","pages":"437-480"},"PeriodicalIF":2.7,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fast Algorithm for the Real-Valued Combinatorial Pure Exploration of the Multi-Armed Bandit","authors":"Shintaro Nakamura;Masashi Sugiyama","doi":"10.1162/neco_a_01728","DOIUrl":"10.1162/neco_a_01728","url":null,"abstract":"We study the real-valued combinatorial pure exploration problem in the stochastic multi-armed bandit (R-CPE-MAB). We study the case where the size of the action set is polynomial with respect to the number of arms. In such a case, the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits. We introduce the combinatorial gap-based exploration (CombGapE) algorithm, whose sample complexity upper-bound-matches the lower bound up to a problem-dependent constant factor. We numerically show that the CombGapE algorithm outperforms existing methods significantly in both synthetic and real-world data sets.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 2","pages":"294-310"},"PeriodicalIF":2.7,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}