{"title":"Fusing Foveal Fixations Using Linear Retinal Transformations and Bayesian Experimental Design.","authors":"Christopher K I Williams","doi":"10.1162/neco.a.33","DOIUrl":"https://doi.org/10.1162/neco.a.33","url":null,"abstract":"<p><p>Humans (and many vertebrates) face the problem of fusing together multiple fixations of a scene in order to obtain a representation of the whole, where each fixation uses a high-resolution fovea and decreasing resolution in the periphery. In this letter, we explicitly represent the retinal transformation of a fixation as a linear downsampling of a high-resolution latent image of the scene, exploiting the known geometry. This linear transformation allows us to carry out exact inference for the latent variables in factor analysis (FA) and mixtures of FA models of the scene. This also allows us to formulate and solve the choice of where to look next as a Bayesian experimental design problem using the expected information gain criterion. Experiments on the Frey faces and MNIST data sets demonstrate the effectiveness of our models.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-22"},"PeriodicalIF":2.1,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting MCTS With Free Energy Minimization.","authors":"Mawaba Pascal Dao, Adrian M Peter","doi":"10.1162/neco.a.31","DOIUrl":"https://doi.org/10.1162/neco.a.31","url":null,"abstract":"<p><p>Active inference, grounded in the free energy principle, provides a powerful lens for understanding how agents balance exploration and goal-directed behavior in uncertain environments. Here, we propose a new planning framework that integrates Monte Carlo tree search (MCTS) with active inference objectives to systematically reduce epistemic uncertainty while pursuing extrinsic rewards. Our key insight is that MCTS, already renowned for its search efficiency, can be naturally extended to incorporate free energy minimization by blending expected rewards with information gain. Concretely, the cross-entropy method (CEM) is used to optimize action proposals at the root node, while tree expansions leverage reward modeling alongside intrinsic exploration bonuses. This synergy allows our planner to maintain coherent estimates of value and uncertainty throughout planning, without sacrificing computational tractability. Empirically, we benchmark our planner on a diverse set of continuous control tasks, where it demonstrates performance gains over both stand-alone CEM and MCTS with random rollouts.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-30"},"PeriodicalIF":2.1,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Phase From Observed Trajectories Using the Temporal 1-Form.","authors":"Simon Wilshin, Matthew D Kvalheim, Clayton Scott, Shai Revzen","doi":"10.1162/neco.a.32","DOIUrl":"https://doi.org/10.1162/neco.a.32","url":null,"abstract":"<p><p>Oscillators are ubiquitous in nature and are usually associated with the existence of an asymptotic phase that governs the long-term dynamics of the oscillator. We show that the asymptotic phase can be estimated using a carefully chosen series expansion that directly computes the phase response curve (PRC) and provides an algorithm for estimating the coefficients of this series. Unlike previously available data-driven phase estimation methods, our algorithm can use observations that are much shorter than a cycle; has proven convergence rate bounds as a function of the properties of measurement noise and system noise; will recover phase within any forward invariant region for which sufficient data are available; recovers the PRCs that govern weak oscillator coupling; and recovers isochron curvature and recovers nonlinear features of isochron geometry. Our method may find application wherever models of oscillator dynamics need to be constructed from measured or simulated time-series.</p>","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":" ","pages":"1-47"},"PeriodicalIF":2.1,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Multigroup Gaussian Process Factor Models","authors":"Evren Gokcen;Anna I. Jasper;Adam Kohn;Christian K. Machens;Byron M. Yu","doi":"10.1162/neco.a.22","DOIUrl":"10.1162/neco.a.22","url":null,"abstract":"Gaussian processes are now commonly used in dimensionality reduction approaches tailored to neuroscience, especially to describe changes in high-dimensional neural activity over time. As recording capabilities expand to include neuronal populations across multiple brain areas, cortical layers, and cell types, interest in extending gaussian process factor models to characterize multipopulation interactions has grown. However, the cubic runtime scaling of current methods with the length of experimental trials and the number of recorded populations (groups) precludes their application to large-scale multipopulation recordings. Here, we improve this scaling from cubic to linear in both trial length and group number. We present two approximate approaches to fitting multigroup gaussian process factor models based on inducing variables and the frequency domain. Empirically, both methods achieved orders of magnitude speed-up with minimal impact on statistical performance, in simulation and on neural recordings of hundreds of neurons across three brain areas. The frequency domain approach, in particular, consistently provided the greatest runtime benefits with the fewest trade-offs in statistical performance. We further characterize the estimation biases introduced by the frequency domain approach and demonstrate effective strategies to mitigate them. This work enables a powerful class of analysis techniques to keep pace with the growing scale of multipopulation recordings, opening new avenues for exploring brain function.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1709-1782"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Generalized Entropic Sparsification for Convolutional Neural Networks","authors":"Tin Barisin;Illia Horenko","doi":"10.1162/neco.a.21","DOIUrl":"10.1162/neco.a.21","url":null,"abstract":"Convolutional neural networks (CNNs) are reported to be overparametrized. The search for optimal (minimal) and sufficient architecture is an NP-hard problem: if the network has N neurons, then there are 2N possibilities to connect them—and therefore 2N possible architectures and 2N Boolean hyperparameters to encode them. Selecting the best possible hyperparameter out of them becomes an Np-hard problem since 2N grows in N faster then any polynomial Np. Here, we introduce a layer-by-layer data-driven pruning method based on the mathematical idea aiming at a computationally scalable entropic relaxation of the pruning problem. The sparse subnetwork is found from the pretrained (full) CNN using the network entropy minimization as a sparsity constraint. This allows deploying a numerically scalable algorithm with a sublinear scaling cost. The method is validated on several benchmarks (architectures): on MNIST (LeNet), resulting in sparsity of 55% to 84% and loss in accuracy of just 0.1% to 0.5%, and on CIFAR-10 (VGG-16, ResNet18), resulting in sparsity of 73% to 89% and loss in accuracy of 0.1% to 0.5%.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1648-1676"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Architectural Biases of the Cortical Microcircuit","authors":"Aishwarya Balwani;Suhee Cho;Hannah Choi","doi":"10.1162/neco.a.23","DOIUrl":"10.1162/neco.a.23","url":null,"abstract":"The cortex plays a crucial role in various perceptual and cognitive functions, driven by its basic unit, the canonical cortical microcircuit. Yet, we remain short of a framework that definitively explains the structure-function relationships of this fundamental neuroanatomical motif. To better understand how physical substrates of cortical circuitry facilitate their neuronal dynamics, we employ a computational approach using recurrent neural networks and representational analyses. We examine the differences manifested by the inclusion and exclusion of biologically motivated interareal laminar connections on the computational roles of different neuronal populations in the microcircuit of hierarchically related areas throughout learning. Our findings show that the presence of feedback connections correlates with the functional modularization of cortical populations in different layers and provides the microcircuit with a natural inductive bias to differentiate expected and unexpected inputs at initialization, which we justify mathematically. Furthermore, when testing the effects of training the microcircuit and its variants with a predictive-coding-inspired strategy, we find that doing so helps better encode noisy stimuli in areas of the cortex that receive feedback, all of which combine to suggest evidence for a predictive-coding mechanism serving as an intrinsic operative logic in the cortex.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1551-1599"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Function to Implementation: Exploring Degeneracy in Evolved Artificial Agents","authors":"Zhimin Hu;Oğulcan Cingiler;Clifford Bohm;Larissa Albantakis","doi":"10.1162/neco.a.19","DOIUrl":"10.1162/neco.a.19","url":null,"abstract":"Degeneracy—the ability of different structures to perform the same function—is a fundamental feature of biological systems, contributing to their robustness and evolvability. However, the ubiquity of degeneracy in systems generated through adaptive processes complicates our understanding of the behavioral and computational strategies they employ. In this study, we investigated degeneracy in simple computational agents, known as Markov brains, trained using an artificial evolution algorithm to solve a spatial navigation task with or without associative memory. We analyzed degeneracy at three levels: behavioral, structural, and computational, with a focus on the last. Using information-theoretical concepts, Tononi et al. (1999) proposed a functional measure of degeneracy within biological networks. Here, we extended this approach to compare degeneracy across multiple networks. Using information-theoretical tools and causal analysis, we explored the computational strategies of the evolved agents and quantified their computational degeneracy. Our findings reveal a hierarchy of degenerate solutions, from varied behaviors to diverse structures and computations. Even agents with identical evolved behaviors demonstrated different underlying structures and computations. These results underscore the pervasive nature of degeneracy in neural networks, blurring the lines between the algorithmic and implementation levels in adaptive systems, and highlight the importance of advanced analytical tools to understand their complex behaviors.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1677-1708"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11180098","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synergistic Pathways of Modulation Enable Robust Task Packing Within Neural Dynamics","authors":"Giacomo Vedovati;ShiNung Ching","doi":"10.1162/neco.a.18","DOIUrl":"10.1162/neco.a.18","url":null,"abstract":"Understanding how brain networks learn and manage multiple tasks simultaneously is of interest in both neuroscience and artificial intelligence. In this regard, a recent research thread in theoretical neuroscience has focused on how recurrent neural network models and their internal dynamics enact multitask learning. To manage different tasks requires a mechanism to convey information about task identity or context into the model, which from a biological perspective may involve mechanisms of neuromodulation. In this study, we use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics, at the level of neuronal excitability and at the level of synaptic strength. We characterize these mechanisms in terms of their functional outcomes, focusing on their robustness to context ambiguity and, relatedly, their efficiency with respect to packing multiple tasks into finite-size networks. We also demonstrate the distinction between these mechanisms at the level of the neuronal dynamics they induce. Together, these characterizations indicate complementarity and synergy in how these mechanisms act, potentially over many timescales, toward enhancing the robustness of multitask learning.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1529-1550"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring Stimulus Information Transfer Between Neural Populations Through the Communication Subspace","authors":"Oren Weiss;Ruben Coen-Cagli","doi":"10.1162/neco.a.17","DOIUrl":"10.1162/neco.a.17","url":null,"abstract":"Sensory processing arises from the communication between neural populations across multiple brain areas. While the widespread presence of neural response variability shared throughout a neural population limits the amount of stimulus-related information those populations can accurately represent, how this variability affects the interareal communication of sensory information is unknown. We propose a mathematical framework to understand the impact of neural population response variability on sensory information transmission. We combine linear Fisher information, a metric connecting stimulus representation and variability, with the framework of communication subspaces, which suggests that functional mappings between cortical populations are low-dimensional relative to the space of population activity patterns. From this, we partition Fisher information depending on the alignment between the population covariance and the mean tuning direction projected onto the communication subspace or its orthogonal complement. We provide mathematical and numerical analyses of our proposed decomposition of Fisher information and examine theoretical scenarios that demonstrate how to leverage communication subspaces for flexible routing and gating of stimulus information. This work will provide researchers investigating interareal communication with a theoretical lens through which to understand sensory information transmission and guide experimental design.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 9","pages":"1600-1647"},"PeriodicalIF":2.1,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144700384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous-Time Neural Networks Can Stably Memorize Random Spike Trains","authors":"Hugo Aguettaz;Hans-Andrea Loeliger","doi":"10.1162/neco_a_01768","DOIUrl":"10.1162/neco_a_01768","url":null,"abstract":"This letter explores the capability of continuous-time recurrent neural networks to store and recall precisely timed scores of spike trains. We show (by numerical experiments) that this is indeed possible: within some range of parameters, any random score of spike trains (for all neurons in the network) can be robustly memorized and autonomously reproduced with stable accurate relative timing of all spikes, with probability close to one. We also demonstrate associative recall under noisy conditions. In these experiments, the required synaptic weights are computed offline to satisfy a template that encourages temporal stability.","PeriodicalId":54731,"journal":{"name":"Neural Computation","volume":"37 8","pages":"1439-1468"},"PeriodicalIF":2.1,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144592960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}