Jie Chang, Zhuoran Li, Zhongyi Wang, Louis Tao, Zhuo-Cheng Xiao
{"title":"将信息损失最小化,将尖峰神经网络简化为微分方程","authors":"Jie Chang , Zhuoran Li , Zhongyi Wang , Louis Tao , Zhuo-Cheng Xiao","doi":"10.1016/j.jcp.2025.114117","DOIUrl":null,"url":null,"abstract":"<div><div>Spiking neuronal networks (SNNs) are widely used in computational neuroscience, from biologically realistic modeling of local cortical networks to phenomenological modeling of the whole brain. Despite their prevalence, a systematic mathematical theory for finite-sized SNNs remains elusive, even for idealized homogeneous networks. The primary challenges are twofold: 1) the rich, parameter-sensitive SNN dynamics, and 2) the singularity and irreversibility of spikes. These challenges pose significant difficulties when relating SNNs to systems of differential equations, leading previous studies to impose additional assumptions or to focus on individual dynamic regimes. In this study, we introduce a Markov approximation of homogeneous SNN dynamics to minimize information loss when translating SNNs into ordinary differential equations. Our only assumption for the Markov approximation is the fast self-decorrelation of synaptic conductances. The system of ordinary differential equations derived from the Markov model effectively captures high-frequency partial synchrony and the metastability of finite-neuron networks produced by interacting excitatory and inhibitory populations. Besides accurately predicting dynamical statistics, such as firing rates, our theory also quantitatively captures the geometry of attractors and bifurcation structures of SNNs. Thus, our work provides a comprehensive mathematical framework that can systematically map parameters of single-neuron physiology, network coupling, and external stimuli to homogeneous SNN dynamics.</div></div>","PeriodicalId":352,"journal":{"name":"Journal of Computational Physics","volume":"537 ","pages":"Article 114117"},"PeriodicalIF":3.8000,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Minimizing information loss reduces spiking neuronal networks to differential equations\",\"authors\":\"Jie Chang , Zhuoran Li , Zhongyi Wang , Louis Tao , Zhuo-Cheng Xiao\",\"doi\":\"10.1016/j.jcp.2025.114117\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Spiking neuronal networks (SNNs) are widely used in computational neuroscience, from biologically realistic modeling of local cortical networks to phenomenological modeling of the whole brain. Despite their prevalence, a systematic mathematical theory for finite-sized SNNs remains elusive, even for idealized homogeneous networks. The primary challenges are twofold: 1) the rich, parameter-sensitive SNN dynamics, and 2) the singularity and irreversibility of spikes. These challenges pose significant difficulties when relating SNNs to systems of differential equations, leading previous studies to impose additional assumptions or to focus on individual dynamic regimes. In this study, we introduce a Markov approximation of homogeneous SNN dynamics to minimize information loss when translating SNNs into ordinary differential equations. Our only assumption for the Markov approximation is the fast self-decorrelation of synaptic conductances. The system of ordinary differential equations derived from the Markov model effectively captures high-frequency partial synchrony and the metastability of finite-neuron networks produced by interacting excitatory and inhibitory populations. 
Besides accurately predicting dynamical statistics, such as firing rates, our theory also quantitatively captures the geometry of attractors and bifurcation structures of SNNs. Thus, our work provides a comprehensive mathematical framework that can systematically map parameters of single-neuron physiology, network coupling, and external stimuli to homogeneous SNN dynamics.</div></div>\",\"PeriodicalId\":352,\"journal\":{\"name\":\"Journal of Computational Physics\",\"volume\":\"537 \",\"pages\":\"Article 114117\"},\"PeriodicalIF\":3.8000,\"publicationDate\":\"2025-05-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Computational Physics\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0021999125004000\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computational Physics","FirstCategoryId":"101","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0021999125004000","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Minimizing information loss reduces spiking neuronal networks to differential equations
Spiking neuronal networks (SNNs) are widely used in computational neuroscience, from biologically realistic modeling of local cortical networks to phenomenological modeling of the whole brain. Despite their prevalence, a systematic mathematical theory for finite-sized SNNs remains elusive, even for idealized homogeneous networks. The primary challenges are twofold: 1) the rich, parameter-sensitive SNN dynamics, and 2) the singularity and irreversibility of spikes. These challenges pose significant difficulties when relating SNNs to systems of differential equations, leading previous studies to impose additional assumptions or to focus on individual dynamic regimes. In this study, we introduce a Markov approximation of homogeneous SNN dynamics to minimize information loss when translating SNNs into ordinary differential equations. Our only assumption for the Markov approximation is the fast self-decorrelation of synaptic conductances. The system of ordinary differential equations derived from the Markov model effectively captures high-frequency partial synchrony and the metastability of finite-neuron networks produced by interacting excitatory and inhibitory populations. Besides accurately predicting dynamical statistics, such as firing rates, our theory also quantitatively captures the geometry of attractors and bifurcation structures of SNNs. Thus, our work provides a comprehensive mathematical framework that can systematically map parameters of single-neuron physiology, network coupling, and external stimuli to homogeneous SNN dynamics.
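The abstract describes the reduction only at a high level, and the paper's Markov-derived ODE system is not reproduced here. As a rough illustration of the general idea of mapping coupling and external-drive parameters of interacting excitatory and inhibitory populations into an ODE for population activity, the sketch below integrates a classic Wilson-Cowan-style rate model. This is a much simpler, generic reduction, not the method proposed in the paper, and all parameter values are hypothetical.

```python
# Generic illustration only: a Wilson-Cowan-style rate ODE for coupled
# excitatory (E) and inhibitory (I) populations. This is NOT the paper's
# Markov-derived reduction; it only shows the general shape of mapping
# network coupling and external drive into an ODE for population activity.
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    """Sigmoidal population gain function."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical parameters (illustrative values, not fitted to any SNN).
tau_E, tau_I = 10.0, 5.0     # population time constants (ms)
w_EE, w_EI = 12.0, 10.0      # E->E and I->E coupling strengths
w_IE, w_II = 10.0, 2.0       # E->I and I->I coupling strengths
I_E, I_I = 1.5, 0.5          # external drive to each population

def rhs(t, r):
    rE, rI = r
    drE = (-rE + f(w_EE * rE - w_EI * rI + I_E)) / tau_E
    drI = (-rI + f(w_IE * rE - w_II * rI + I_I)) / tau_I
    return [drE, drI]

sol = solve_ivp(rhs, (0.0, 300.0), [0.1, 0.1], max_step=0.5)
print("E/I activity at t = 300 ms:", sol.y[0, -1], sol.y[1, -1])
```

Depending on the parameters, such E-I rate models settle to a fixed point or oscillate; the paper's contribution, per the abstract, is a finite-size reduction derived with minimal assumptions rather than a phenomenological rate model of this kind.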
Journal description:
Journal of Computational Physics thoroughly treats the computational aspects of physical problems, presenting techniques for the numerical solution of mathematical equations arising in all areas of physics. The journal seeks to emphasize methods that cross disciplinary boundaries.
The Journal of Computational Physics also publishes short notes of 4 pages or less (including figures, tables, and references but excluding title pages). Letters to the Editor commenting on articles already published in this Journal will also be considered. Neither notes nor letters should have an abstract.