{"title":"Relating sparse and predictive coding to divisive normalization.","authors":"Yanbo Lian, Anthony N Burkitt","doi":"10.1371/journal.pcbi.1013059","DOIUrl":null,"url":null,"abstract":"<p><p>Sparse coding, predictive coding and divisive normalization have each been found to be principles that underlie the function of neural circuits in many parts of the brain, supported by substantial experimental evidence. However, the connections between these related principles are still poorly understood. Sparse coding and predictive coding can be reconciled into a learning framework with predictive structure and sparse responses, termed as sparse/predictive coding. However, how sparse/predictive coding (a learning model) is connected with divisive normalization (not a learning model) is still not well investigated. In this paper, we show how sparse coding, predictive coding, and divisive normalization can be described within a unified framework, and illustrate this explicitly within the context of a two-layer neural learning model of sparse/predictive coding. This two-layer model is constructed in a way that implements sparse coding with a network structure that is constructed by implementing predictive coding. We demonstrate how a homeostatic function that regulates neural responses in the model can shape the nonlinearity of neural responses in a way that replicates different forms of divisive normalization. Simulations show that the model can learn simple cells in the primary visual cortex with the property of contrast saturation, which has previously been explained by divisive normalization. In summary, the study demonstrates that the three principles of sparse coding, predictive coding, and divisive normalization can be connected to provide a learning framework based on biophysical properties, such as Hebbian learning and homeostasis, and this framework incorporates both learning and more diverse response nonlinearities observed experimentally. This framework has the potential to also be used to explain how the brain learns to integrate input from different sensory modalities.</p>","PeriodicalId":20241,"journal":{"name":"PLoS Computational Biology","volume":"21 5","pages":"e1013059"},"PeriodicalIF":3.8000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12112309/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLoS Computational Biology","FirstCategoryId":"99","ListUrlMain":"https://doi.org/10.1371/journal.pcbi.1013059","RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/5/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0
Abstract
Sparse coding, predictive coding, and divisive normalization have each been found to be principles that underlie the function of neural circuits in many parts of the brain, supported by substantial experimental evidence. However, the connections between these related principles are still poorly understood. Sparse coding and predictive coding can be reconciled into a learning framework with predictive structure and sparse responses, termed sparse/predictive coding. However, how sparse/predictive coding (a learning model) is connected with divisive normalization (not a learning model) has not been well investigated. In this paper, we show how sparse coding, predictive coding, and divisive normalization can be described within a unified framework, and illustrate this explicitly within the context of a two-layer neural learning model of sparse/predictive coding. This two-layer model implements sparse coding within a network structure derived from predictive coding. We demonstrate how a homeostatic function that regulates neural responses in the model can shape the nonlinearity of those responses in a way that replicates different forms of divisive normalization. Simulations show that the model can learn simple cells of the primary visual cortex that exhibit contrast saturation, a property previously explained by divisive normalization. In summary, the study demonstrates that the three principles of sparse coding, predictive coding, and divisive normalization can be connected to provide a learning framework grounded in biophysical properties, such as Hebbian learning and homeostasis, that incorporates both learning and the more diverse response nonlinearities observed experimentally. This framework also has the potential to explain how the brain learns to integrate input from different sensory modalities.
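For readers unfamiliar with the quantitative form of divisive normalization referenced above, the following is a minimal Python sketch of the canonical normalization equation (a unit's driven input, raised to a power, divided by the pooled activity of the population plus a semisaturation constant) and of the contrast saturation it produces. It is illustrative only: the function and parameter names (gamma, sigma, n) follow generic textbook conventions and are not code or values from the paper.

```python
import numpy as np

def divisive_normalization(drives, gamma=1.0, sigma=0.1, n=2.0):
    """Canonical divisive normalization: each unit's driving input,
    raised to the power n, is divided by a semisaturation constant
    plus the pooled (summed) activity of the whole population."""
    drives = np.asarray(drives, dtype=float)
    pooled = sigma ** n + np.sum(drives ** n)
    return gamma * drives ** n / pooled

# Contrast saturation: scaling a fixed stimulus pattern by increasing
# contrast makes each unit's normalized response saturate, producing
# the sigmoidal (Naka-Rushton-like) contrast-response curve that the
# abstract attributes to simple cells.
pattern = np.array([1.0, 0.5, 0.25])  # hypothetical filter outputs
for contrast in [0.05, 0.1, 0.2, 0.4, 0.8]:
    response = divisive_normalization(contrast * pattern)
    print(f"contrast={contrast:.2f} -> response of unit 0: {response[0]:.3f}")
```

In the paper's model, a comparable saturating nonlinearity is reported to emerge from learned homeostatic regulation of neural responses rather than from an explicit division of this form.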
About the journal:
PLOS Computational Biology features works of exceptional significance that further our understanding of living systems at all scales—from molecules and cells, to patient populations and ecosystems—through the application of computational methods. Readers include life and computational scientists, who can take the important findings presented here to the next level of discovery.
Research articles must be declared as belonging to a relevant section. More information about the sections can be found in the submission guidelines.
Research articles should model aspects of biological systems, demonstrate both methodological and scientific novelty, and provide profound new biological insights.
Generally, the reliability and significance of biological discovery through computation should be validated and enriched by experimental studies. Inclusion of experimental validation is not required for publication, but should be referenced where possible. Inclusion of experimental validation of a modest biological discovery through computation does not render a manuscript suitable for PLOS Computational Biology.
Research articles specifically designated as Methods papers should describe outstanding methods of exceptional importance that have been shown, or have the promise, to provide new biological insights. The method must already be widely adopted, or have the promise of wide adoption, by a broad community of users. Enhancements to existing published methods will only be considered if those enhancements bring exceptional new capabilities.