Sparse Spiking Neural-like Membrane Systems on Graphics Processing Units
Javier Hernández-Tello, Miguel Ángel Martínez-del-Amor, David Orellana-Martín, Francis George C. Cabarle
arXiv - CS - Neural and Evolutionary Computing, 2024-08-08. DOI: https://doi.org/arxiv-2408.04343
Abstract
The parallel simulation of Spiking Neural P systems is mainly based on a matrix representation, in which the graph inherent to the neural model is encoded as an adjacency matrix. The simulation algorithm then reduces to a matrix-vector multiplication, an operation that is implemented efficiently on parallel devices. However, when the graph of a Spiking Neural P system is not fully connected, the adjacency matrix is sparse, and a large amount of computing time and memory is wasted on zero entries. For this reason, two compression methods for the matrix representation were proposed in previous work, but they were neither implemented nor parallelized in a simulator. In this paper, both methods are implemented and parallelized on GPUs as part of a new simulator for Spiking Neural P systems with delays. Extensive experiments on high-end GPUs (RTX 2080 and A100 80 GB) show that these methods outperform other solutions based on state-of-the-art GPU libraries when simulating Spiking Neural P systems.
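
To make the matrix-based approach concrete, the sketch below illustrates one simulation step of the standard matrix representation, C_{k+1} = C_k + s_k · M, where C_k is the configuration vector (spikes per neuron), s_k is the spiking vector (which rules fire), and M is the transition matrix. This is only an illustrative toy in Python/SciPy, not the authors' CUDA simulator or their compression methods; the example system and all variable names are assumptions, and CSR merely stands in for the compressed formats studied in the paper.

```python
# Minimal sketch (assumption: standard matrix representation of SN P systems
# without delays). When the synapse graph is not fully connected, M is mostly
# zeros, so a sparse format avoids storing and multiplying the zero entries.
import numpy as np
from scipy.sparse import csr_matrix

# Toy system: 3 neurons in a chain sigma1 -> sigma2 -> sigma3, one rule each.
# Row r of M: spikes consumed by rule r (negative entry in its own neuron)
# and spikes produced in the neurons it synapses to.
M_dense = np.array([
    [-1,  1,  0],   # rule in sigma1: consume 1 spike, send 1 to sigma2
    [ 0, -1,  1],   # rule in sigma2: consume 1 spike, send 1 to sigma3
    [ 0,  0, -1],   # rule in sigma3: consume 1 spike (forgetting-like)
], dtype=np.int64)

M_sparse = csr_matrix(M_dense)            # only the nonzero entries are stored

C = np.array([2, 0, 0], dtype=np.int64)   # initial configuration
s = np.array([1, 0, 0], dtype=np.int64)   # spiking vector: only sigma1's rule fires

C_next_dense  = C + s @ M_dense           # dense step: C + s·M
C_next_sparse = C + M_sparse.T @ s        # same product via the sparse matrix

assert np.array_equal(C_next_dense, C_next_sparse)
print(C_next_dense)                       # [1 1 0]: sigma1 spent a spike, sigma2 gained one
```

The GPU analogue of this step is a sparse matrix-vector multiplication (SpMV), which is the operation the paper's compressed representations are designed to accelerate.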