Application of modular and sparse complex networks in enhancing connectivity patterns of liquid state machines
Farideh Motaghian, Soheila Nazari, Reza Jafari, Juan P. Dominguez-Morales
Chaos, Solitons & Fractals (IF 5.3, Q1 in Mathematics, Interdisciplinary Applications) · DOI: 10.1016/j.chaos.2024.115940 · Published: 2024-12-24 · Journal Article
Different neurons in biological brain systems can self-organize into distinct neural circuits that enable a range of cognitive activities. Spiking neural networks (SNNs), which offer greater biological plausibility and processing capacity than traditional neural networks, are one avenue of investigation for brain-like computing. A liquid state machine (LSM) is a neural computational model with a recurrent network structure based on an SNN. This research proposes a novel LSM structure in which the output layer comprises classification pyramid neurons, the intermediate layer is the liquid layer, and the input layer is generated from a retina model. Here, the liquid layer is treated as a modular complex network: the number of clusters in the liquid layer corresponds to the number of hidden patterns in the data, which increases classification accuracy. Because this network is sparse, computational time is reduced and the network learns faster than a fully connected one. Using this concept, the interior of the liquid layer of the LSM is organized into clusters rather than wired with random connections as in other studies. Subsequently, an unsupervised Power-Spike Time Dependent Plasticity (Pow-STDP) learning rule optimizes the synaptic connections between the liquid and output layers. The proposed LSM structure performed impressively against deep and spiking classification networks on three challenging datasets: MNIST, CIFAR-10, and CIFAR-100, reaching accuracies of 98.1 % (6 training epochs), 95.4 % (6 training epochs), and 75.52 % (20 training epochs), respectively, an improvement over previous spiking networks. The proposed network is not only more accurate than earlier spike-based learning techniques but also converges faster during training. Its benefits include unsupervised learning, minimal power consumption when deployed on neuromorphic devices, higher classification accuracy, and fewer training epochs (faster training).
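To make the modular, sparse liquid layer concrete, the sketch below builds a block-structured connectivity matrix in which neurons connect densely within a cluster and sparsely between clusters. This is a minimal illustration of the idea only, not the paper's implementation: the function name, neuron and cluster counts, and connection probabilities are all assumptions.

```python
import numpy as np

def modular_liquid_connectivity(n_neurons, n_clusters,
                                p_intra=0.3, p_inter=0.01, seed=0):
    """Build a sparse, block-modular adjacency matrix for a liquid layer.

    Neurons are split evenly into n_clusters modules; connections are
    dense within a module (p_intra) and sparse between modules (p_inter).
    All probabilities here are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_clusters), n_neurons // n_clusters)
    labels = np.resize(labels, n_neurons)         # pad if not evenly divisible
    same = labels[:, None] == labels[None, :]     # intra-cluster mask
    prob = np.where(same, p_intra, p_inter)       # blockwise wiring probability
    adj = rng.random((n_neurons, n_neurons)) < prob
    np.fill_diagonal(adj, False)                  # no self-connections
    return adj.astype(np.int8), labels

adj, labels = modular_liquid_connectivity(n_neurons=1000, n_clusters=10)
print("connection density:", adj.mean())          # far below 1.0 -> sparse
```

The low overall density is what yields the claimed savings in computational time relative to a fully connected liquid, while the block structure gives each cluster the chance to specialize on one hidden pattern in the data.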
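The abstract names the Pow-STDP rule but does not state its update equation. As a hedged illustration only, the sketch below implements a generic pair-based STDP update whose potentiation and depression scale with a power of the current weight; the functional form and every constant are placeholders, not the authors' rule.

```python
import numpy as np

def pow_stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, mu=0.5, w_max=1.0):
    """One pair-based, weight-power-dependent STDP update (illustrative).

    dt = t_post - t_pre in milliseconds. Potentiation scales with
    (w_max - w)**mu and depression with w**mu, a common power-law form;
    the actual Pow-STDP rule and constants are defined in the paper.
    """
    if dt >= 0:   # pre fires before post -> potentiate
        dw = a_plus * (w_max - w) ** mu * np.exp(-dt / tau_plus)
    else:         # post fires before pre -> depress
        dw = -a_minus * w ** mu * np.exp(dt / tau_minus)
    return np.clip(w + dw, 0.0, w_max)

w = 0.5
w = pow_stdp_update(w, dt=+5.0)   # causal pairing strengthens the synapse
w = pow_stdp_update(w, dt=-5.0)   # anti-causal pairing weakens it
```

Because the update depends only on locally available spike times and the current weight, it is unsupervised, which matches the role the abstract assigns to Pow-STDP in training the liquid-to-output synapses.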
Journal overview:
Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.