Graph-Based Spatio-Temporal Backpropagation for Training Spiking Neural Networks

Yulong Yan, Haoming Chu, Xin Chen, Yi Jin, Y. Huan, Lirong Zheng, Zhuo Zou
2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
DOI: 10.1109/AICAS51828.2021.9458461
Published: 2021-06-06
Citations: 6

Abstract

Dedicated hardware for spiking neural networks (SNNs) reduces energy consumption through spike-driven computing. This paper proposes graph-based spatio-temporal backpropagation (G-STBP) to train SNNs, aiming to enhance spike sparsity for energy efficiency while ensuring accuracy. A differentiable leaky integrate-and-fire (LIF) model is introduced to establish the backpropagation path, and sparse regularization is proposed to reduce the spike firing rate with guaranteed accuracy. Thanks to its graph representation, G-STBP enables training on arbitrary network topologies. A recurrent network is demonstrated with spike-sparse rank order coding. Experimental results on rank-order-coded MNIST show that the recurrent SNN trained by G-STBP achieves 97.3% accuracy using 392 spikes per inference.
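The paper's exact formulation of the differentiable LIF model is not given in this abstract; the sketch below illustrates the general idea such methods rely on, under assumed parameter choices (decay factor `tau`, threshold `v_th`, surrogate sharpness `alpha` are all illustrative, not taken from the paper). A hard spike threshold has a derivative that is zero almost everywhere, so backpropagation substitutes a smooth surrogate, and a firing-rate penalty encourages spike sparsity:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.
    v: membrane potential vector, x: weighted input current."""
    v = v / tau + x                        # leaky integration
    spike = (v >= v_th).astype(float)      # hard threshold (non-differentiable)
    v = v * (1.0 - spike)                  # reset fired neurons to zero
    return v, spike

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Sigmoid-shaped surrogate for d(spike)/d(v), used during backprop
    in place of the true derivative, which is zero almost everywhere."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)

def sparsity_penalty(spikes, lam=1e-3):
    """Regularization term penalizing the average firing rate, added to
    the task loss to trade a controlled amount of accuracy for sparsity."""
    return lam * spikes.mean()
```

In a full training loop, the surrogate derivative would be inserted into the chain rule wherever a spike depends on a membrane potential, both across layers (spatial) and across time steps (temporal), which is what "spatio-temporal" backpropagation refers to.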