Localist neural plasticity identified by mutual information
Gabriele Scheler, Martin L Schumann, Johann Schumann
Journal of Computational Neuroscience, published 2025-03-22. DOI: 10.1007/s10827-025-00901-w
Citations: 0
Abstract
We present a model of pattern memory and retrieval with novel, technically useful, and biologically realistic properties. Specifically, we present n variations of k pattern classes (n*k patterns) to a cortex-like, balanced inhibitory-excitatory network with heterogeneous neurons and let each pattern spread within the recurrent network. We show that we can identify high mutual-information (MI) neurons as the major information-bearing elements within each pattern representation. We then employ a simple one-shot adaptive (learning) process that focuses on high-MI neurons and on inhibition. Such 'localist plasticity' is highly efficient because it requires only a few adaptations per pattern. Specifically, we store k=10 patterns of size s=400 in a 1000/1200-neuron network. By stimulating high-MI neurons we recall patterns, such that the whole network comes to represent the pattern. We assess the quality of the representation (a) before learning, when entering the pattern into a naive network, (b) after learning, on the adapted network, and (c) after recall by stimulation. The recalled patterns are easily recognized by a trained classifier, and each recalled pattern 'unfolds' over the recurrent network with high similarity to the original input pattern. We also examine the distribution of neuron properties in the network and find that an initially Gaussian distribution changes into a more heavy-tailed, lognormal distribution during adaptation. The remarkable result is that reliable pattern recall can be achieved by stimulating only high-information neurons. This work provides a biologically inspired model of cortical memory and may have interesting technical applications.
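To make the central selection step concrete, here is a minimal sketch (not the authors' code) of how high-MI neurons could be identified: given the responses of all neurons to the n*k pattern presentations with known class labels, estimate each neuron's mutual information with the class label and keep the top-scoring neurons as candidate stimulation targets. The use of scikit-learn's `mutual_info_score`, quantile binning with `n_bins=4`, and n=40 variations per class are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of MI-based neuron selection (illustrative; not the authors' implementation).
import numpy as np
from sklearn.metrics import mutual_info_score

def high_mi_neurons(rates, labels, n_bins=4, top=50):
    """rates: (n_presentations, n_neurons) firing rates;
    labels: (n_presentations,) pattern-class ids;
    returns indices of the `top` highest-MI neurons."""
    n_neurons = rates.shape[1]
    mi = np.empty(n_neurons)
    for j in range(n_neurons):
        # Discretize each neuron's response into quantile bins before estimating MI
        # (a common, simple estimator; binning choice is an assumption).
        edges = np.quantile(rates[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        binned = np.digitize(rates[:, j], edges)
        mi[j] = mutual_info_score(labels, binned)
    return np.argsort(mi)[::-1][:top]

# Toy example loosely matching the abstract's dimensions: k=10 classes,
# a 1200-neuron network, and (assumed) n=40 variations per class.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 40)                 # 400 presentations
rates = rng.gamma(2.0, 1.0, size=(400, 1200))         # stand-in network responses
rates[labels == 3, :20] += 3.0                        # make 20 neurons informative about class 3
print(high_mi_neurons(rates, labels, top=10))         # mostly indices < 20
```

Note that this sketch covers only the identification step; in the paper, the one-shot localist plasticity then adapts connections involving these high-MI neurons and inhibition, and recall proceeds by stimulating them.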
Journal description:
The Journal of Computational Neuroscience provides a forum for papers that fit the interface between computational and experimental work in the neurosciences. The journal publishes full-length original papers, rapid communications, and review articles describing theoretical and experimental work relevant to computations in the brain and nervous system. Papers that combine theoretical and experimental work are especially encouraged. Primarily theoretical papers should deal with issues of obvious relevance to biological nervous systems. Experimental papers should have implications for the computational function of the nervous system, and may report results using any of a variety of approaches, including anatomy, electrophysiology, biophysics, imaging, and molecular biology. Papers investigating the physiological mechanisms underlying pathologies of the nervous system, or papers that report novel technologies of interest to researchers in computational neuroscience, including advances in neural data analysis methods yielding insights into the function of the nervous system, are also welcome (in this case, methodological papers should include an application of the new method, exemplifying the insights that it yields). It is anticipated that all levels of analysis, from cognitive to cellular, will be represented in the Journal of Computational Neuroscience.