{"title":"Sparse connectivity enables efficient information processing in cortex-like artificial neural networks.","authors":"Rieke Fruengel, Marcel Oberlaender","doi":"10.3389/fncir.2025.1528309","DOIUrl":null,"url":null,"abstract":"<p><p>Neurons in cortical networks are very sparsely connected; even neurons whose axons and dendrites overlap are highly unlikely to form a synaptic connection. What is the relevance of such sparse connectivity for a network's function? Surprisingly, it has been shown that sparse connectivity impairs information processing in artificial neural networks (ANNs). Does this imply that sparse connectivity also impairs information processing in biological neural networks? Although ANNs were originally inspired by the brain, conventional ANNs differ substantially in their structural network architecture from cortical networks. To disentangle the relevance of these structural properties for information processing in networks, we systematically constructed ANNs constrained by interpretable features of cortical networks. We find that in large and recurrently connected networks, as are found in the cortex, sparse connectivity facilitates time- and data-efficient information processing. We explore the origins of these surprising findings and show that conventional dense ANNs distribute information across only a very small fraction of nodes, whereas sparse ANNs distribute information across more nodes. We show that sparsity is most critical in networks with fixed excitatory and inhibitory nodes, mirroring neuronal cell types in cortex. This constraint causes a large learning delay in densely connected networks which is eliminated by sparse connectivity. Taken together, our findings show that sparse connectivity enables efficient information processing given key constraints from cortical networks, setting the stage for further investigation into higher-order features of cortical connectivity.</p>","PeriodicalId":12498,"journal":{"name":"Frontiers in Neural Circuits","volume":"19 ","pages":"1528309"},"PeriodicalIF":3.4000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11966417/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Neural Circuits","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fncir.2025.1528309","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
Neurons in cortical networks are very sparsely connected; even neurons whose axons and dendrites overlap are highly unlikely to form a synaptic connection. What is the relevance of such sparse connectivity for a network's function? Surprisingly, it has been shown that sparse connectivity impairs information processing in artificial neural networks (ANNs). Does this imply that sparse connectivity also impairs information processing in biological neural networks? Although ANNs were originally inspired by the brain, conventional ANNs differ substantially from cortical networks in their structural network architecture. To disentangle the relevance of these structural properties for information processing in networks, we systematically constructed ANNs constrained by interpretable features of cortical networks. We find that in large and recurrently connected networks, as are found in the cortex, sparse connectivity facilitates time- and data-efficient information processing. We explore the origins of these surprising findings and show that conventional dense ANNs distribute information across only a very small fraction of nodes, whereas sparse ANNs distribute information across more nodes. We show that sparsity is most critical in networks with fixed excitatory and inhibitory nodes, mirroring neuronal cell types in cortex. This constraint causes a large learning delay in densely connected networks, which is eliminated by sparse connectivity. Taken together, our findings show that sparse connectivity enables efficient information processing given key constraints from cortical networks, setting the stage for further investigation into higher-order features of cortical connectivity.
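The abstract describes ANNs constrained by interpretable features of cortical networks, in particular sparse recurrent connectivity and fixed excitatory/inhibitory node types. The sketch below is a minimal, hypothetical NumPy illustration of how such constraints could be encoded in a recurrent weight matrix; it is not the authors' implementation, and all parameter values (network size, connection probability, excitatory fraction) are placeholder assumptions chosen for illustration only.

# Illustrative sketch, not the paper's code: a sparse recurrent weight matrix
# with fixed excitatory (+) and inhibitory (-) node types.
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 1000           # hypothetical network size
sparsity = 0.05          # hypothetical connection probability (~5%)
frac_excitatory = 0.8    # hypothetical E/I ratio, loosely cortex-like

# Binary mask: each potential connection exists with probability `sparsity`;
# no self-connections.
mask = rng.random((n_nodes, n_nodes)) < sparsity
np.fill_diagonal(mask, False)

# Fixed cell types: each node is excitatory (+1) or inhibitory (-1), and all
# of its outgoing weights share that sign (a Dale's-law-like constraint).
signs = np.where(rng.random(n_nodes) < frac_excitatory, 1.0, -1.0)

# Non-negative weight magnitudes, sign applied per presynaptic node.
magnitudes = np.abs(rng.normal(0.0, 1.0, size=(n_nodes, n_nodes)))
W = mask * magnitudes * signs[:, None]   # W[i, j]: weight from node i to node j

# A few steps of simple recurrent dynamics with the constrained weights.
x = rng.normal(size=n_nodes)
for _ in range(5):
    x = np.tanh(W.T @ x)

print(f"realized connection density: {mask.mean():.3f}")

Dense and sign-free variants of the same construction (mask of all ones, or unconstrained signs) would correspond to the conventional ANN baselines that the abstract contrasts with the sparse, cell-type-constrained case.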
Journal description:
Frontiers in Neural Circuits publishes rigorously peer-reviewed research on the emergent properties of neural circuits - the elementary modules of the brain. Specialty Chief Editors Takao K. Hensch (Harvard University) and Edward Ruthazer (McGill University) are supported by an outstanding Editorial Board of international experts. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics and the public worldwide.
Frontiers in Neural Circuits launched in 2011 with great success and remains a "central watering hole" for research in neural circuits, serving the community worldwide to share data, ideas and inspiration. Articles revealing the anatomy, physiology, development or function of any neural circuitry in any species (from sponges to humans) are welcome. A common thread across the journal is the search for the computational strategies used by different circuits to link their structure with function (perceptual, motor, or internal), the general rules by which they operate, and how their particular designs lead to the emergence of complex properties and behaviors. Submissions focused on synaptic, cellular and connectivity principles in neural microcircuits using multidisciplinary approaches, especially newer molecular, developmental and genetic tools, are encouraged. Studies with an evolutionary perspective to better understand how circuit design and capabilities evolved to produce progressively more complex properties and behaviors are especially welcome. The journal is further interested in research revealing how plasticity shapes the structural and functional architecture of neural circuits.