Converting High-Performance and Low-Latency SNNs Through Explicit Modeling of Residual Error in ANNs

IF 8.9 · CAS Tier 1 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Zhipeng Huang;Jianhao Ding;Zhiyu Pan;Haoran Li;Ying Fang;Zhaofei Yu;Jian K. Liu
{"title":"Converting High-Performance and Low-Latency SNNs Through Explicit Modeling of Residual Error in ANNs","authors":"Zhipeng Huang;Jianhao Ding;Zhiyu Pan;Haoran Li;Ying Fang;Zhaofei Yu;Jian K. Liu","doi":"10.1109/TNNLS.2025.3567567","DOIUrl":null,"url":null,"abstract":"Spiking neural networks (SNNs) have garnered interest due to their energy efficiency and superior effectiveness on neuromorphic chips compared with traditional artificial neural networks (ANNs). One of the mainstream approaches to implementing deep SNNs is the ANN–SNN conversion, which integrates the efficient training strategy of ANNs with the energy-saving potential and fast inference capability of SNNs. However, under extremely low-latency conditions, the existing conversion theory suggests that the problem of SNNs’ neurons firing more or fewer spikes within each layer, i.e., residual error, leads to a performance gap in the converted SNNs compared with the original ANNs. This severely limits the possibility of the practical application of SNNs on delay-sensitive edge devices. Existing conversion methods addressing this problem usually involve modifying the state of the conversion spiking neurons. However, these methods do not consider their adaptability and compatibility with neuromorphic chips. We propose a new approach based on explicit modeling of residual errors as additive noise. The noise is incorporated into the activation function of the source ANN, effectively reducing the impact of residual error on SNN performance. Our experiments on the CIFAR10/100 and Tiny-ImageNet datasets verify that our approach exceeds the prevailing ANN–SNN conversion methods and directly trained SNNs concerning accuracy and the required time steps. Overall, our method provides new ideas for improving SNN performance under ultralow-latency conditions and is expected to promote practical neuromorphic hardware applications for further development. The code for our NQ framework is available at <uri>https://github.com/hzp2022/ANN2SNN_NQ</uri>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 9","pages":"16788-16802"},"PeriodicalIF":8.9000,"publicationDate":"2025-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on neural networks and learning systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11017686/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Spiking neural networks (SNNs) have garnered interest due to their energy efficiency and superior effectiveness on neuromorphic chips compared with traditional artificial neural networks (ANNs). One of the mainstream approaches to implementing deep SNNs is ANN–SNN conversion, which integrates the efficient training strategy of ANNs with the energy-saving potential and fast inference capability of SNNs. However, under extremely low-latency conditions, existing conversion theory suggests that SNN neurons firing more or fewer spikes than intended within each layer, i.e., residual error, leads to a performance gap between the converted SNNs and the original ANNs. This severely limits the practical application of SNNs on delay-sensitive edge devices. Existing conversion methods addressing this problem usually involve modifying the state of the converted spiking neurons. However, these methods do not consider adaptability to and compatibility with neuromorphic chips. We propose a new approach based on explicit modeling of residual errors as additive noise. The noise is incorporated into the activation function of the source ANN, effectively reducing the impact of residual error on SNN performance. Our experiments on the CIFAR10/100 and Tiny-ImageNet datasets verify that our approach outperforms prevailing ANN–SNN conversion methods and directly trained SNNs in both accuracy and the number of required time steps. Overall, our method provides new ideas for improving SNN performance under ultralow-latency conditions and is expected to promote the further development of practical neuromorphic hardware applications. The code for our NQ framework is available at https://github.com/hzp2022/ANN2SNN_NQ
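To make the mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of a clip-and-quantize ANN activation that injects additive noise during training to stand in for the conversion residual error. The class name NoisyQuantActivation, the uniform noise distribution, and all hyperparameters are illustrative assumptions rather than the paper's exact NQ formulation; see the linked repository for the authors' implementation.

```python
# Minimal sketch (illustrative, not the paper's exact method): a source-ANN
# activation that quantizes to num_levels discrete values (mimicking an SNN
# running for T = num_levels time steps) and, during training, adds noise
# that stands in for the residual error introduced by conversion.
import torch
import torch.nn as nn


class NoisyQuantActivation(nn.Module):
    """Clip-and-quantize activation with training-time additive noise (hypothetical sketch)."""

    def __init__(self, num_levels: int = 4, threshold: float = 1.0):
        super().__init__()
        self.num_levels = num_levels                        # analogous to the SNN time steps T
        self.threshold = nn.Parameter(torch.tensor(threshold))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the (learnable) threshold, clip to [0, 1], and quantize to
        # num_levels values -- mimicking the spike counts an integrate-and-fire
        # neuron can emit within T time steps.
        z = torch.clamp(x / self.threshold, 0.0, 1.0)
        q = torch.floor(z * self.num_levels + 0.5) / self.num_levels

        if self.training:
            # Additive noise standing in for the residual error (extra or missing
            # spikes). Uniform noise of at most half a quantization step is an
            # assumption made here purely for illustration.
            noise = (torch.rand_like(q) - 0.5) / self.num_levels
            q = torch.clamp(q + noise, 0.0, 1.0)

        # Straight-through estimator: the forward pass uses the (noisy) quantized
        # value, while gradients flow through the continuous clipped value z.
        out = z + (q - z).detach()
        return out * self.threshold
```

In a typical conversion pipeline, a module of this kind would replace the ReLU of the source ANN during training; after training, the learned threshold would serve as the firing threshold of the integrate-and-fire neurons in the converted SNN. Again, this is a sketch of the general noise-injection idea, not a reproduction of the NQ framework.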
Source journal
IEEE Transactions on Neural Networks and Learning Systems
Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
CiteScore: 23.80
Self-citation rate: 9.60%
Annual article count: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.