Active intrinsic conductances in recurrent networks allow for long-lasting transients and sustained activity with realistic firing rates as well as robust plasticity.
Impact Factor 1.5 · CAS Tier 4 (Medicine) · JCR Q3 (Mathematical & Computational Biology)
Authors: Tuba Aksoy, Harel Z. Shouval
DOI: 10.1007/s10827-021-00797-2
Journal: Journal of Computational Neuroscience, vol. 50, no. 1, pp. 121-132
Published: 2022-02-01 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8818023/pdf/
Citations: 0
Abstract
Recurrent neural networks of spiking neurons can exhibit long-lasting and even persistent activity. Such networks are often not robust, however, and exhibit spike and firing-rate statistics that are inconsistent with experimental observations. To overcome this problem, most previous models had to assume that recurrent connections are dominated by slower NMDA-type excitatory receptors. Usually, the single neurons within these networks are very simple leaky integrate-and-fire neurons or other low-dimensional model neurons. Real neurons, however, are much more complex and exhibit a plethora of active conductances that are recruited in both the sub- and supra-threshold regimes. Here we show that by including a small number of additional active conductances we can produce recurrent networks that are both more robust and exhibit firing-rate statistics more consistent with experimental results. We show that this holds both for bistable recurrent networks, which are thought to underlie working memory, and for slowly decaying networks, which might underlie the estimation of interval timing. We also show that, by including these conductances, such networks can be trained with a simple learning rule to predict temporal intervals an order of magnitude longer than those that can be trained in networks of leaky integrate-and-fire neurons.
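The core intuition behind the abstract — that a slow active conductance can extend a neuron's effective timescale well beyond its membrane time constant — can be illustrated with a minimal single-neuron sketch. This is a toy model with hypothetical parameters chosen only to make the effect visible; it is not the authors' network model:

```python
# Toy demonstration: a leaky integrate-and-fire (LIF) neuron with an
# added slow, spike-triggered depolarizing conductance. All parameter
# values are illustrative assumptions, not taken from the paper.

def simulate(t_stop=1.0, dt=1e-4, i_stim=2.0, stim_end=0.2,
             g_slow=0.0, tau_slow=0.5):
    """Return spike times (s). g_slow=0 gives a plain LIF neuron."""
    tau_m, v_rest, v_th, v_reset = 0.02, 0.0, 1.0, 0.0
    e_slow = 2.0               # reversal above threshold -> depolarizing
    v, a = v_rest, 0.0         # a: activation level of the slow conductance
    spikes = []
    for k in range(int(t_stop / dt)):
        t = k * dt
        i_ext = i_stim if t < stim_end else 0.0
        dv = (-(v - v_rest) + i_ext + g_slow * a * (e_slow - v)) / tau_m
        v += dv * dt
        a -= (a / tau_slow) * dt          # slow decay of activation
        if v >= v_th:
            spikes.append(t)
            v = v_reset
            a = min(a + 0.2, 1.0)         # each spike recruits the conductance
    return spikes

passive = simulate(g_slow=0.0)   # firing stops when the stimulus ends
active = simulate(g_slow=1.5)    # firing outlasts the stimulus
```

With the slow conductance enabled, firing continues long after the 200 ms stimulus ends, because each spike recruits a depolarizing current that decays on the slow timescale `tau_slow` rather than the fast membrane timescale `tau_m`. Without it, activity terminates at stimulus offset — loosely analogous to the contrast the paper draws between plain LIF networks and networks with additional active conductances.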
About the Journal
The Journal of Computational Neuroscience provides a forum for papers at the interface between computational and experimental work in the neurosciences. It publishes full-length original papers, rapid communications, and review articles describing theoretical and experimental work relevant to computations in the brain and nervous system. Papers that combine theoretical and experimental work are especially encouraged. Primarily theoretical papers should deal with issues of obvious relevance to biological nervous systems. Experimental papers should have implications for the computational function of the nervous system, and may report results using any of a variety of approaches, including anatomy, electrophysiology, biophysics, imaging, and molecular biology. Papers investigating the physiological mechanisms underlying pathologies of the nervous system, or papers that report novel technologies of interest to researchers in computational neuroscience, including advances in neural data analysis methods yielding insights into the function of the nervous system, are also welcomed (in this case, methodological papers should include an application of the new method, exemplifying the insights that it yields). It is anticipated that all levels of analysis, from cognitive to cellular, will be represented in the Journal of Computational Neuroscience.