Sparse Spiking Neural-like Membrane Systems on Graphics Processing Units

Javier Hernández-Tello, Miguel Ángel Martínez-del-Amor, David Orellana-Martín, Francis George C. Cabarle
arXiv:2408.04343 · Published 2024-08-08 · arXiv - CS - Neural and Evolutionary Computing
Citations: 0

Abstract

The parallel simulation of Spiking Neural P systems is mainly based on a matrix representation, where the graph inherent to the neural model is encoded in an adjacency matrix. The simulation algorithm is based on a matrix-vector multiplication, an operation that is efficiently implemented on parallel devices. However, when the graph of a Spiking Neural P system is not fully connected, the adjacency matrix is sparse and, hence, a large amount of computing resources is wasted in both time and memory. For this reason, two compression methods for the matrix representation were proposed in a previous work, but they were neither implemented nor parallelized in a simulator. In this paper, they are implemented and parallelized on GPUs as part of a new simulator for Spiking Neural P systems with delays. Extensive experiments are conducted on high-end GPUs (RTX2080 and A100 80GB), and it is concluded that they outperform other solutions based on state-of-the-art GPU libraries when simulating Spiking Neural P systems.
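To illustrate the idea behind the abstract, the following is a minimal sketch of one transition step of a matrix-based SN P system simulator, comparing the dense product against a compressed (CSR) one. This is not the paper's two compression schemes or its GPU code; the toy transition matrix, rule semantics, and CSR choice are assumptions made for illustration only.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical transition matrix: rows are rules, columns are neurons;
# entry (r, j) is the net spike change that firing rule r causes in neuron j.
transition = np.array([
    [-2,  1,  0,  0],   # rule 0: consumes 2 spikes in neuron 0, sends 1 to neuron 1
    [ 0, -1,  1,  0],   # rule 1: consumes 1 spike in neuron 1, sends 1 to neuron 2
    [ 0,  0, -1,  1],   # rule 2: consumes 1 spike in neuron 2, sends 1 to neuron 3
], dtype=np.int64)

config = np.array([2, 1, 1, 0], dtype=np.int64)   # current spikes per neuron
spiking = np.array([1, 1, 1], dtype=np.int64)     # which rules fire this step

# Dense update: C' = C + s . M  (the matrix-vector product from the abstract)
dense_next = config + spiking @ transition

# Same product with the zeros compressed away; on a sparse graph this is
# where the time and memory savings described in the paper come from.
sparse_next = config + csr_matrix(transition.T) @ spiking

assert np.array_equal(dense_next, sparse_next)
print(dense_next)  # -> [0 1 1 1]
```

On this tiny example the two products trivially agree; the point of the paper is that on large, sparsely connected systems the compressed representation avoids storing and multiplying the zero entries that dominate the adjacency matrix.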