Scaling neural simulations in STACS

Felix Wang, Shruti Kulkarni, Bradley H. Theilman, Fredrick Rothganger, C. Schuman, Seung-Hwan Lim, J. Aimone
{"title":"在 STACS 中扩展神经模拟","authors":"Felix Wang, Shruti Kulkarni, Bradley H. Theilman, Fredrick Rothganger, C. Schuman, Seung-Hwan Lim, J. Aimone","doi":"10.1088/2634-4386/ad3be7","DOIUrl":null,"url":null,"abstract":"\n As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on STACS (Simulation Tool for Asynchronous Cortical Streams), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.","PeriodicalId":198030,"journal":{"name":"Neuromorphic Computing and Engineering","volume":"45 5","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Scaling neural simulations in STACS\",\"authors\":\"Felix Wang, Shruti Kulkarni, Bradley H. Theilman, Fredrick Rothganger, C. Schuman, Seung-Hwan Lim, J. Aimone\",\"doi\":\"10.1088/2634-4386/ad3be7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. 
Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on STACS (Simulation Tool for Asynchronous Cortical Streams), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.\",\"PeriodicalId\":198030,\"journal\":{\"name\":\"Neuromorphic Computing and Engineering\",\"volume\":\"45 5\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuromorphic Computing and Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/2634-4386/ad3be7\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuromorphic Computing and Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2634-4386/ad3be7","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on STACS (Simulation Tool for Asynchronous Cortical Streams), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
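The abstract does not name the "parallel data format with a history of use in graph partitioners," but the best-known such format is the distributed CSR (compressed sparse row) layout used by ParMETIS. The sketch below illustrates what a neural extension of that layout could look like; the struct and field names are hypothetical and are not STACS's actual data structures.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of a ParMETIS-style distributed CSR partition, the kind
// of graph-partitioner format the abstract describes extending for neural
// networks. Names are illustrative only.
struct DistributedCsrPartition {
  // vtxdist[p]..vtxdist[p+1] is the global neuron range owned by partition p.
  // It is replicated on every partition, so any global index can be mapped to
  // its owner without communication.
  std::vector<std::int64_t> vtxdist;

  // Standard CSR arrays for the locally owned neurons only: the synapses of
  // local neuron v are adjncy[xadj[v] .. xadj[v+1]).
  std::vector<std::int64_t> xadj;    // size: nLocalNeurons + 1
  std::vector<std::int64_t> adjncy;  // global indices of synaptic targets

  // A neural extension attaches per-neuron and per-synapse state (parameters,
  // weights, conduction delays) alongside the topology, kept in contiguous,
  // partition-local arrays that serialize naturally for checkpointing.
  std::vector<double> neuronState;   // per-neuron state, flattened
  std::vector<double> synapseWeight; // per-synapse weight
  std::vector<double> synapseDelay;  // per-synapse conduction delay

  // Map a global neuron index to its owning partition via binary search over
  // vtxdist; useful when routing spikes to remote targets.
  int ownerOf(std::int64_t globalNeuron) const {
    int lo = 0, hi = static_cast<int>(vtxdist.size()) - 2;
    while (lo < hi) {
      int mid = (lo + hi) / 2;
      if (globalNeuron >= vtxdist[mid + 1]) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  }
};
```

Because vtxdist is small (one entry per partition plus one) while the CSR arrays are strictly partition-local, this layout keeps per-rank memory proportional to the local subnetwork, which is what makes it suitable as a portable intermediate representation at scale.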
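The serialization needed for checkpoint-restart maps naturally onto Charm++'s PUP (pack/unpack) framework, in which one method describes an object's state for both migration and on-disk checkpoints. A minimal sketch follows, assuming a hypothetical partition class; the field names are illustrative, though `PUP::er` and the `p | x` operators are the real Charm++ API.

```cpp
#include "pup.h"      // Charm++ PUP framework
#include "pup_stl.h"  // PUP operators for STL containers
#include <vector>

// Hypothetical network-partition state; field names are illustrative.
class NetworkPartition {
  std::vector<double> voltage;  // per-neuron membrane state
  std::vector<double> weight;   // per-synapse weights
  double tSim;                  // current simulation time

 public:
  // Charm++ invokes pup() both when migrating a chare and when writing or
  // reading a checkpoint: the same code path packs state to disk and
  // restores it on restart.
  void pup(PUP::er &p) {
    p | voltage;
    p | weight;
    p | tSim;
  }
};
```

A real chare array element would additionally chain to its generated base class's pup() method, but the principle is the same: one traversal of the state serves sizing, packing, and unpacking.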
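The "multicast communication supported by Charm++" is typically realized with array section proxies delegated to the CkMulticast library, so that one logical send from a source partition fans out along a spanning tree to every partition holding its synaptic targets. The sketch below uses the documented section-proxy API, but the chare name `NetPart`, its entry method, and the surrounding setup are hypothetical, and the matching .ci interface declarations are elided.

```cpp
// Assumed .ci declaration (elided here):
//   array [1D] NetPart { entry NetPart(); entry void recvSpikes(SpikeMsg*); };
#include "charm++.h"
#include "ckmulticast.h"

void buildSpikeSection(CProxy_NetPart parts, int first, int last) {
  // A section proxy addresses only the partitions that contain synaptic
  // targets of a given source, here the contiguous range [first, last].
  CProxySection_NetPart section = CProxySection_NetPart::ckNew(
      parts.ckGetArrayID(), first, last, /*stride=*/1);

  // Delegate the section to CkMulticast so a send fans out along a spanning
  // tree instead of a point-to-point loop over members. (The manager group
  // would normally be created once at startup, not per section.)
  CkGroupID mcastGrp = CProxy_CkMulticastMgr::ckNew();
  CkMulticastMgr *mgr = CProxy_CkMulticastMgr(mcastGrp).ckLocalBranch();
  section.ckSectionDelegate(mgr);

  // One logical send now reaches every member of the section:
  // section.recvSpikes(msg);
}
```

This is why a spatially dependent synaptic structure matters for scaling: when a partition's targets cluster on a small set of neighboring partitions, each spike volley becomes a few tree-structured multicasts rather than thousands of individual messages.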