Neural Computation: Latest Articles

Continuous-Time Neural Networks Can Stably Memorize Random Spike Trains
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-20 · DOI: 10.1162/neco_a_01768
Hugo Aguettaz, Hans-Andrea Loeliger
Abstract: This letter explores the capability of continuous-time recurrent neural networks to store and recall precisely timed scores of spike trains. We show (by numerical experiments) that this is indeed possible: within some range of parameters, any random score of spike trains (for all neurons in the network) can be robustly memorized and autonomously reproduced with stable accurate relative timing of all spikes, with probability close to one. We also demonstrate associative recall under noisy conditions. In these experiments, the required synaptic weights are computed offline to satisfy a template that encourages temporal stability.
Pages: 1-30 · Citations: 0
A Categorical Framework for Quantifying Emergent Effects in Network Topology
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-20 · DOI: 10.1162/neco_a_01766
Johnny Jingze Li, Sebastian Pardo Guerra, Kalyan Basu, Gabriel A Silva
Abstract: Emergent effects are crucial to understanding properties of complex systems that do not appear in their basic units, yet theories for measuring these effects and understanding their mechanisms have been lacking. In this letter, we consider emergence as a kind of structural nonlinearity, discuss a framework based on homological algebra that encodes emergence as the mathematical structure of cohomologies, and then apply it to network models to develop a computational measure of emergence. This framework ties the potential for emergent effects of a system to its network topology and local structures, paving the way to predict and understand the causes of emergent effects. Our numerical experiments show that our measure of emergence correlates with the existing information-theoretic measure of emergence.
Pages: 1-30 · Citations: 0
Nonlinear Neural Dynamics and Classification Accuracy in Reservoir Computing
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-20 · DOI: 10.1162/neco_a_01770
Claus Metzner, Achim Schilling, Andreas Maier, Patrick Krauss
Abstract: Reservoir computing, that is, information processing based on untrained recurrent neural networks with random connections, is expected to depend on the nonlinear properties of the neurons and the resulting oscillatory, chaotic, or fixed-point dynamics of the network. However, the degree of nonlinearity required and the range of suitable dynamical regimes for a given task remain poorly understood. To clarify these issues, we study the classification accuracy of a reservoir computer in artificial tasks of varying complexity while tuning both the neurons' degree of nonlinearity and the reservoir's dynamical regime. We find that even with activation functions of extremely reduced nonlinearity, weak recurrent interactions, and small input signals, the reservoir can compute useful representations. These representations, detectable only in higher-order principal components, make complex classification tasks linearly separable for the readout layer. Increasing the recurrent coupling leads to spontaneous dynamical behavior. Nevertheless, some input-related computations can "ride on top" of oscillatory or fixed-point attractors with little loss of accuracy, whereas chaotic dynamics often reduces task performance. By tuning the system through the full range of dynamical phases, we observe in several classification tasks that accuracy peaks at both the oscillatory/chaotic and chaotic/fixed-point phase boundaries, supporting the edge-of-chaos hypothesis. We also present a regression task with the opposite behavior. Our findings, particularly the robust weakly nonlinear operating regime, may offer new perspectives for both technical and biological neural networks with random connectivity.
Pages: 1-36 · Citations: 0
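As a rough illustration of the setup this abstract describes (an untrained random recurrent network whose internal states serve as representations), here is a minimal echo-state-style reservoir sketch. All names, sizes, and parameter values are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo-state-style reservoir (illustrative assumptions throughout).
N = 100                                      # reservoir size
alpha = 0.9                                  # target spectral radius
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= alpha / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
w_in = rng.normal(0.0, 1.0, N)               # input weights

def run_reservoir(u, nonlinearity=np.tanh):
    """Drive the reservoir with scalar input sequence u; return state trajectory."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = nonlinearity(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 200))   # toy input signal
X = run_reservoir(u)
print(X.shape)                               # (200, 100)
```

Varying `alpha` (recurrent coupling) and swapping `np.tanh` for a weaker nonlinearity is how one would probe the paper's question of how much nonlinearity the representations actually need.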
Predictive Coding Model Detects Novelty on Different Levels of Representation Hierarchy
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-20 · DOI: 10.1162/neco_a_01769
T Ed Li, Mufeng Tang, Rafal Bogacz
Abstract: Novelty detection, also known as familiarity discrimination or recognition memory, refers to the ability to distinguish whether a stimulus has been seen before. It has been hypothesized that novelty detection can naturally arise within networks that store memory or learn efficient neural representations, because these networks already store information on familiar stimuli. However, existing computational models supporting this idea have yet to reproduce the high capacity of human recognition memory, leaving the hypothesis in question. This article demonstrates that predictive coding, an established model previously shown to effectively support representation learning and memory, can also naturally discriminate novelty with high capacity. The predictive coding model includes neurons encoding prediction errors, and we show that these neurons produce higher activity for novel stimuli, so that novelty can be decoded from their activity. Additionally, hierarchical predictive coding networks detect novelty at different levels of abstraction within the hierarchy, from low-level sensory features such as arrangements of pixels to high-level semantic features such as object identities. Overall, based on predictive coding, this article establishes a unified framework that brings together novelty detection, associative memory, and representation learning, demonstrating that a single model can capture these various cognitive functions.
Pages: 1-36 · Citations: 0
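The core idea, prediction-error neurons firing more for unfamiliar stimuli, can be caricatured with a linear model: fit a "prediction" to the familiar stimuli and use the residual error as the novelty signal. This is only a toy sketch of the principle, not the paper's hierarchical network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy error-based novelty detection (illustrative, not the paper's model):
# the subspace spanned by familiar stimuli acts as the "prediction", and the
# residual prediction error is the novelty signal.
d = 50
familiar = rng.normal(size=(20, d))          # stimuli seen during learning

# Orthonormal basis of the familiar subspace; P @ x is the "predicted" input.
U, _, _ = np.linalg.svd(familiar.T, full_matrices=False)
P = U @ U.T

def novelty(x):
    """Prediction-error energy: near zero for familiar input, large for novel."""
    return float(np.linalg.norm(x - P @ x))

novel = rng.normal(size=d)
print(max(novelty(x) for x in familiar))     # ~0: familiar inputs are predicted
print(novelty(novel))                        # clearly larger for a novel input
```

The hierarchical version in the paper would compute analogous error signals at every layer, so novelty in pixels and novelty in object identity register at different levels.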
Crystal-LSBO: Automated Design of De Novo Crystals with Latent Space Bayesian Optimization
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-20 · DOI: 10.1162/neco_a_01767
Onur Boyar, Yanheng Gu, Yuji Tanaka, Shunsuke Tonoga, Tomoya Itakura, Ichiro Takeuchi
Abstract: Generative modeling of crystal structures is significantly challenged by the complexity of the input data, which constrains the ability of these models to explore and discover novel crystals. This complexity often confines de novo design methodologies to merely small perturbations of known crystals and hampers the effective application of advanced optimization techniques. One such optimization technique, latent space Bayesian optimization (LSBO), has demonstrated promising results in uncovering novel objects across various domains, especially when combined with variational autoencoders (VAEs). Recognizing LSBO's potential and the critical need for innovative crystal discovery, we introduce Crystal-LSBO, a de novo design framework for crystals specifically tailored to enhance explorability within LSBO frameworks. Crystal-LSBO employs multiple VAEs, each dedicated to a distinct aspect of crystal structure (lattice, coordinates, and chemical elements), orchestrated by an integrative model that synthesizes these components into a cohesive output. This setup not only streamlines the learning process but also produces explorable latent spaces, thanks to the decreased complexity of the learning task for each model, enabling LSBO approaches to operate. Our study pioneers the use of LSBO for de novo crystal design, demonstrating its efficacy through optimization tasks focused mainly on formation energy values. Our results highlight the effectiveness of our methodology, offering a new perspective for de novo crystal discovery.
Pages: 1-23 · Citations: 0
Rapid Memory Encoding in a Spiking Hippocampus Circuit Model
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-17 · DOI: 10.1162/neco_a_01762
Jiashuo Wang, Mengwen Yuan, Jiangrong Shen, Qingao Chai, Huajin Tang
Abstract: Memory is a complex process in the brain that involves the encoding, consolidation, and retrieval of previously experienced stimuli. The brain is capable of rapidly forming memories of sensory input. However, applying the memory system to real-world data poses challenges in practical implementation. This article demonstrates that through the integration of a sparse spike pattern encoding scheme, a population tempotron, and various spike-timing-dependent plasticity (STDP) learning rules, supported by bounded weights and biological mechanisms, it is possible to rapidly form stable neural assemblies of external sensory inputs in a spiking neural circuit model inspired by the hippocampal structure. The model employs a neural ensemble module and competitive learning strategies that mimic the pattern-separation mechanism of the hippocampal dentate gyrus (DG) area to achieve nonoverlapping sparse coding. It also uses a population tempotron and NMDA (N-methyl-D-aspartate)-mediated STDP to construct associative and episodic memories, analogous to the CA3 and CA1 regions. These memories are represented by strongly connected neural assemblies formed within just a few trials. Overall, this model offers a robust computational framework to accommodate rapid memory throughout the brain-wide memory process.
Volume 37, Issue 7 · Pages: 1320-1352 · Citations: 0
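For readers unfamiliar with STDP, the basic pair-based rule can be stated in a few lines: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, the reverse order weakens it, with exponentially decaying influence. This is the textbook rule only; the paper combines several STDP variants with bounded weights, which this toy does not reproduce:

```python
import math

# Pair-based STDP (textbook form; amplitudes and time constant are illustrative).
A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:                               # pre before post: potentiate
        return A_plus * math.exp(-dt / tau)
    return -A_minus * math.exp(dt / tau)      # post before pre: depress

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pair: weight increases
w += stdp_dw(t_pre=30.0, t_post=25.0)   # anticausal pair: weight decreases
print(round(w, 4))                      # 0.4984
```

With `A_minus` slightly larger than `A_plus`, uncorrelated spiking depresses on average, which is one common way such rules keep weights bounded.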
Decision Threshold Learning in the Basal Ganglia for Multiple Alternatives
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-17 · DOI: 10.1162/neco_a_01760
Thom Griffith, Sophie-Anne Baker, Nathan F. Lepora
Abstract: In recent years, researchers have integrated the historically separate reinforcement learning (RL) and evidence-accumulation-to-bound approaches to decision modeling. A particular outcome of these efforts has been the RL-DDM, a model that combines value learning through reinforcement with a diffusion decision model (DDM). While the RL-DDM is a conceptually elegant extension of the original DDM, it faces a similar problem to the DDM in that it does not scale well to decisions with more than two options. Furthermore, in its current form, the RL-DDM lacks flexibility when it comes to adapting to rapid, context-cued changes in the reward environment. The question of how best to extend combined RL and DDM models so they can handle multiple choices remains open. Moreover, it is currently unclear how these algorithmic solutions should map to neurophysical processes in the brain, particularly in relation to so-called go/no-go-type models of decision making in the basal ganglia. Here, we propose a solution that addresses these issues by combining a previously proposed decision model based on the multichoice sequential probability ratio test (MSPRT) with a dual-pathway model of decision threshold learning in the basal ganglia. Our model learns decision thresholds that optimize the trade-off between time cost and the cost of errors, and so efficiently allocates the amount of time for decision deliberation. In addition, the model is context dependent and hence flexible to changes in the speed-accuracy trade-off (SAT) demanded by the environment. Furthermore, the model reproduces the magnitude effect, a phenomenon seen experimentally in value-based decisions, and is agnostic to the type of evidence, so it can be applied to perceptual decisions, value-based decisions, and other types of modeled evidence. The broader significance of the model is that it contributes to the active research area of how learning systems interact, by linking the previously separate RL-DDM models to dopaminergic models of motivation and risk taking in the basal ganglia, as well as scaling to multiple alternatives.
Volume 37, Issue 7 · Pages: 1256-1287 · Citations: 0
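The MSPRT at the heart of this model is easy to sketch: K channels accumulate log-likelihoods of the incoming evidence, and a decision fires once the leading posterior clears a threshold (the quantity the basal ganglia circuit is proposed to learn). The sketch below is a generic MSPRT, not the authors' circuit; the Gaussian evidence model and all values are assumptions. The two distractor channels deliberately share the same mean, so only the true channel can reach the threshold here:

```python
import math
import random

random.seed(3)

K = 3
means = [0.0, 0.0, 1.0]   # hypothesized evidence means; channel 2 is the true source
sigma = 1.0
threshold = 0.95          # posterior bound: raising it trades speed for accuracy

def loglik(x, mu):
    """Gaussian log-likelihood of one evidence sample (constant terms dropped)."""
    return -((x - mu) ** 2) / (2.0 * sigma ** 2)

L = [0.0] * K             # accumulated log-likelihood per channel
t = 0
while True:
    t += 1
    x = random.gauss(means[2], sigma)                    # sample from true channel
    L = [Li + loglik(x, mu) for Li, mu in zip(L, means)]
    m = max(L)                                           # softmax posteriors,
    post = [math.exp(Li - m) for Li in L]                # shifted for stability
    Z = sum(post)
    post = [p / Z for p in post]
    if max(post) >= threshold:                           # leading hypothesis wins
        choice = post.index(max(post))
        break

print(choice, t)
```

Learning the `threshold` online from reward feedback, and making it context dependent, is the part the paper's dual-pathway basal ganglia model adds on top of this accumulator.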
A Survey on Artificial Neural Networks in Human-Robot Interaction
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-17 · DOI: 10.1162/neco_a_01764
Aleksandra Świetlicka
Abstract: Artificial neural networks (ANNs) have shown great potential in enhancing human-robot interaction (HRI). ANNs are computational models inspired by the structure and function of biological neural networks in the brain, which can learn from examples and generalize to new situations. ANNs can be used to enable robots to interact with humans in a more natural and intuitive way by allowing them to recognize human gestures and expressions, understand natural language, and adapt to the environment. ANNs can also be used to improve robot autonomy, allowing robots to learn from their interactions with humans and to make more informed decisions. However, there are also challenges to using ANNs in HRI, including the need for large amounts of training data, issues with explainability, and the potential for bias. This review explores the current state of research on ANNs in HRI, highlighting both the opportunities and challenges of this approach and discussing potential directions for future research. The AI contribution involves applying ANNs to various aspects of HRI, while the application in engineering involves using ANNs to develop more interactive and intuitive robotic systems.
Volume 37, Issue 7 · Pages: 1193-1255 · Citations: 0
Closed-Loop Multistep Planning
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-17 · DOI: 10.1162/neco_a_01761
Giulia Lafratta, Bernd Porr, Christopher Chandler, Alice Miller
Abstract: Living organisms interact with their surroundings in a closed-loop fashion, where sensory inputs dictate the initiation and termination of behaviors. Even simple animals are able to develop and execute complex plans, which has not yet been replicated in robotics using pure closed-loop input control. We propose a solution to this problem by defining a set of discrete and temporary closed-loop controllers, called "Tasks," each representing a closed-loop behavior. We further introduce a supervisory module that has an innate understanding of physics and causality, through which it can simulate the execution of Task sequences over time and store the results in a model of the environment. On the basis of this model, plans can be made by chaining temporary closed-loop controllers. Our proposed framework was implemented for a robot and tested in two scenarios as proof of concept.
Volume 37, Issue 7 · Pages: 1288-1319 · Citations: 0
Excitation-Inhibition Balance Controls Synchronization in a Simple Model of Coupled Phase Oscillators
IF 2.7 · Tier 4 · Computer Science
Neural Computation · Pub Date: 2025-06-17 · DOI: 10.1162/neco_a_01763
Satoshi Kuroki, Kenji Mizuseki
Abstract: Collective neuronal activity in the brain synchronizes during rest and desynchronizes during active behaviors, influencing cognitive processes such as memory consolidation, knowledge abstraction, and creative thinking. These states involve significant modulation of inhibition, which alters the excitation-inhibition (EI) balance of synaptic inputs. However, the influence of the EI balance on collective neuronal oscillation remains only partially understood. In this study, we introduce the EI-Kuramoto model, a modified version of the Kuramoto model, in which oscillators are categorized into excitatory and inhibitory groups with four distinct interaction types: excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory. Numerical simulations identify three dynamic states (synchronized, bistable, and desynchronized) that can be controlled by adjusting the strength of the four interaction types. Theoretical analysis further demonstrates that the balance among these interactions plays a critical role in determining the dynamic states. This study provides valuable insights into the role of EI balance in synchronizing coupled oscillators and neurons.
Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11048764
Volume 37, Issue 7 · Pages: 1353-1372 · Citations: 0
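The qualitative effect the abstract describes can be reproduced in a few lines: give each oscillator a signed coupling gain depending on whether the sender is excitatory or inhibitory, and compare the order parameter in excitation- versus inhibition-dominated regimes. This is a loose sketch of the idea; the group sizes, gains, and integration settings are illustrative assumptions, not the paper's four-parameter model:

```python
import cmath
import math
import random

random.seed(0)

# Kuramoto oscillators split into excitatory (+gain) and inhibitory (-gain) senders.
N_E, N_I = 40, 10
N = N_E + N_I
omega = [random.gauss(0.0, 0.1) for _ in range(N)]   # natural frequencies

def simulate(K_E, K_I, steps=2000, dt=0.01):
    """Euler-integrate phases; E senders couple with +K_E, I senders with -K_I.
    Returns the Kuramoto order parameter r in [0, 1] (1 = full synchrony)."""
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    gain = [K_E] * N_E + [-K_I] * N_I
    for _ in range(steps):
        # Gain-weighted mean field: S = sum_j gain_j * exp(i*theta_j), so that
        # Im(exp(-i*theta_i) * S) = sum_j gain_j * sin(theta_j - theta_i).
        S = sum(g * cmath.exp(1j * t) for g, t in zip(gain, theta))
        theta = [
            t + dt * (w + (cmath.exp(-1j * t) * S).imag / N)
            for t, w in zip(theta, omega)
        ]
    return abs(sum(cmath.exp(1j * t) for t in theta)) / N

r_exc = simulate(K_E=2.0, K_I=0.0)   # excitation-dominated: should synchronize
r_inh = simulate(K_E=0.0, K_I=2.0)   # inhibition-dominated: should stay incoherent
print(r_exc, r_inh)
```

Sweeping `K_E` and `K_I` jointly (and adding the E-to-I and I-to-I gains the paper distinguishes) is how one would map out the synchronized, bistable, and desynchronized regions.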