An empirical study on optimizing binary spiking neural networks for neuromorphic computing
Ping He, Rong Xiao, Chenwei Tang, Shudong Huang, Jiancheng Lv
Applied Soft Computing, Volume 181, Article 113471 (published 2025-06-24). DOI: 10.1016/j.asoc.2025.113471
Abstract
Spiking neural networks process information as asynchronous spikes exchanged between neurons, making them well suited to the spatiotemporal data produced by neuromorphic sensors, and they have shown superior performance in low-latency, low-power computing applications. However, current neuromorphic computing faces challenges such as high synaptic memory usage and complex synapse calculations, and the optimization of the training process lacks a solid foundation. Here, we present a hardware-friendly weight-binarized spiking neural network that reduces storage needs, accelerates optimization, and improves computational efficiency. During training, weight binarization is applied to drastically reduce memory size and memory accesses. Meanwhile, we employ a hybrid optimizer that combines Adam with stochastic gradient descent to address the convergence problems caused by the gradient sparsity that binary weights introduce. During inference, a simple shift-based batch normalization algorithm is introduced that matches the effect of computationally expensive batch normalization (BN) with little loss of accuracy. We then empirically identify and study the effectiveness of various ad hoc techniques on neuromorphic recognition tasks as a case study, providing best practices for optimization. To the best of our knowledge, this is the first work to use systematic comparisons to reveal how effective commonly employed tricks are for training binary spiking neural networks. The implementations will be open-sourced on GitHub.
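To make the binarization step concrete, the following is a minimal PyTorch sketch of a linear layer whose weights are binarized with a sign function in the forward pass, while a straight-through estimator (STE) passes gradients to the latent real-valued weights. The names `BinarizeSTE` and `BinaryLinear` are illustrative assumptions, not identifiers from the paper, and the authors' exact scheme may differ.

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Forward: sign(w) in {-1, 0, +1}; backward: identity (straight-through)."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients through the non-differentiable sign

class BinaryLinear(nn.Module):
    """Linear layer that binarizes its latent real-valued weights on the fly,
    so only 1-bit weights are needed at inference time."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        return x @ BinarizeSTE.apply(self.weight).t()
```

For example, `BinaryLinear(128, 10)(torch.randn(4, 128))` produces a (4, 10) output while keeping only a single latent float per weight during training.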
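The abstract does not say how Adam and stochastic gradient descent are combined. One plausible reading, sketched below purely as an illustration, is to drive the binarized layers with Adam (whose adaptive step sizes help with sparse gradients) and the remaining parameters with plain SGD; the helper name `make_hybrid_optimizers` and the name-based parameter split are assumptions, not the paper's method.

```python
import torch

def make_hybrid_optimizers(model, adam_lr=1e-3, sgd_lr=1e-2):
    # Route binarized-layer parameters to Adam, everything else to SGD.
    binary_params, other_params = [], []
    for name, p in model.named_parameters():
        (binary_params if "binary" in name.lower() else other_params).append(p)
    adam = torch.optim.Adam(binary_params, lr=adam_lr)
    sgd = torch.optim.SGD(other_params, lr=sgd_lr, momentum=0.9)
    return adam, sgd

# Training step: after one backward pass, call adam.step() and sgd.step(),
# then zero both optimizers' gradients.
```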
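Similarly, a shift-based batch normalization can be sketched by rounding the BN scale 1/sqrt(var + eps) to the nearest power of two, so the multiply becomes an arithmetic shift in fixed-point hardware. This is a minimal sketch under that assumption; the paper's exact formulation (for instance, its handling of learned affine parameters) may differ.

```python
import torch

def shift_based_bn(x, running_mean, running_var, eps=1e-5):
    # Round the normalization scale to the nearest power of two so that,
    # in fixed-point hardware, the multiply reduces to a bit shift.
    scale = 1.0 / torch.sqrt(running_var + eps)
    shift = torch.round(torch.log2(scale))            # power-of-two exponent
    return (x - running_mean) * torch.pow(2.0, shift)

# Example: x = torch.randn(8, 64); y = shift_based_bn(x, x.mean(0), x.var(0))
```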
About the journal
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. The focus is on publishing the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them, so the website is continuously updated with new articles and publication times are short.