Scalable event-driven native parallel processing: the SpiNNaker neuromimetic system

Alexander D. Rast, Xin Jin, F. Galluppi, L. Plana, Cameron Patterson, S. Furber
{"title":"Scalable event-driven native parallel processing: the SpiNNaker neuromimetic system","authors":"Alexander D. Rast, Xin Jin, F. Galluppi, L. Plana, Cameron Patterson, S. Furber","doi":"10.1145/1787275.1787279","DOIUrl":null,"url":null,"abstract":"Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Modelling large networks on conventional hardware thus tends to be inefficient if not impossible. Neither dedicated neural chips, with model limitations, nor FPGA implementations, with scalability limitations, offer a satisfactory solution even though they have improved simulation performance dramatically. SpiNNaker introduces a different approach, the \"neuromimetic\" architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. Central to this parallel multiprocessor is an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. While this architecture is particularly suitable for spiking models, it can also implement \"classical\" neural models like the MLP efficiently. Nonetheless, event handling, particularly servicing incoming packets, requires careful and innovative design in order to avoid local processor congestion and possible deadlock. Using two exemplar models, a spiking network using Izhikevich neurons, and an MLP network, we illustrate how to implement efficient service routines to handle input events. These routines form the beginnings of a library of \"drop-in\" neural components. Ultimately, the goal is the creation of a library-based development system that allows the modeller to describe a model in a high-level neural description environment of his choice and use an automated tool chain to create the appropriate SpiNNaker instantiation. 
The complete system: universal hardware, automated tool chain, embedded system management, represents the \"ideal\" neural modelling environment: a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale.","PeriodicalId":151791,"journal":{"name":"Proceedings of the 7th ACM international conference on Computing frontiers","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"45","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th ACM international conference on Computing frontiers","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1787275.1787279","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 45

Abstract

Neural networks present a fundamentally different model of computation from the conventional sequential digital model. Modelling large networks on conventional hardware thus tends to be inefficient, if not impossible. Neither dedicated neural chips, with model limitations, nor FPGA implementations, with scalability limitations, offer a satisfactory solution, even though they have improved simulation performance dramatically. SpiNNaker introduces a different approach, the "neuromimetic" architecture, that maintains the neural optimisation of dedicated chips while offering FPGA-like universal configurability. Central to this parallel multiprocessor is an asynchronous event-driven model that uses interrupt-generating dedicated hardware on the chip to support real-time neural simulation. While this architecture is particularly suitable for spiking models, it can also implement "classical" neural models like the MLP efficiently. Nonetheless, event handling, particularly servicing incoming packets, requires careful and innovative design in order to avoid local processor congestion and possible deadlock. Using two exemplar models, a spiking network using Izhikevich neurons and an MLP network, we illustrate how to implement efficient service routines to handle input events. These routines form the beginnings of a library of "drop-in" neural components. Ultimately, the goal is the creation of a library-based development system that allows the modeller to describe a model in a high-level neural description environment of their choice and use an automated tool chain to create the appropriate SpiNNaker instantiation. The complete system (universal hardware, automated tool chain, embedded system management) represents the "ideal" neural modelling environment: a general-purpose platform that can generate an arbitrary neural network and run it with hardware speed and scale.
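The event-driven pattern the abstract describes, where a packet-receive interrupt only accumulates synaptic input and the numerical update happens separately on the timer tick, can be illustrated with the standard published Izhikevich model. The sketch below is not SpiNNaker code (the real system runs fixed-point ARM968 service routines); it is a minimal Python illustration, with the event queue standing in for the hardware packet-receive interrupt and the `step` call standing in for the timer-driven update. Parameter values are the common regular-spiking defaults; the constant 15.0 drive current is an arbitrary choice for the demo.

```python
from collections import deque

class IzhikevichNeuron:
    """Izhikevich neuron, 1 ms Euler update (standard published dynamics)."""
    def __init__(self, a=0.02, b=0.2, c=-65.0, d=8.0):
        self.a, self.b, self.c, self.d = a, b, c, d
        self.v, self.u = c, b * c   # membrane potential and recovery variable
        self.i_in = 0.0             # input accumulated during this timestep

    def receive_event(self, weight):
        # "Packet received" service routine: only accumulate the synaptic
        # contribution; no integration happens in the event handler.
        self.i_in += weight

    def step(self):
        # Timer-tick routine: 1 ms update of v, split into two 0.5 ms
        # half-steps for numerical stability, then the recovery update.
        for _ in range(2):
            self.v += 0.5 * (0.04 * self.v ** 2 + 5 * self.v + 140
                             - self.u + self.i_in)
        self.u += self.a * (self.b * self.v - self.u)
        self.i_in = 0.0
        if self.v >= 30.0:          # threshold crossed: reset and emit a spike
            self.v = self.c
            self.u += self.d
            return True
        return False

# Input events (timestep, weight): constant drive, standing in for
# packets arriving from the interconnect.
events = deque((t, 15.0) for t in range(100))

neuron = IzhikevichNeuron()
spikes = []
for t in range(100):
    while events and events[0][0] == t:   # drain this tick's packets
        _, w = events.popleft()
        neuron.receive_event(w)
    if neuron.step():
        spikes.append(t)
print(spikes)   # timesteps at which the neuron fired
```

Separating accumulation from integration is what keeps the event handler short, which is the point the paper makes about avoiding local processor congestion: the interrupt routine does a bounded amount of work per packet, and the expensive arithmetic is amortised onto the periodic timer event.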