{"title":"Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications","authors":"Tong Bu, Maohua Li, Zhaofei Yu","doi":"arxiv-2409.03368","DOIUrl":null,"url":null,"abstract":"Spiking Neural Networks (SNNs) have emerged as a promising substitute for\nArtificial Neural Networks (ANNs) due to their advantages of fast inference and\nlow power consumption. However, the lack of efficient training algorithms has\nhindered their widespread adoption. Existing supervised learning algorithms for\nSNNs require significantly more memory and time than their ANN counterparts.\nEven commonly used ANN-SNN conversion methods necessitate re-training of ANNs\nto enhance conversion efficiency, incurring additional computational costs. To\naddress these challenges, we propose a novel training-free ANN-SNN conversion\npipeline. Our approach directly converts pre-trained ANN models into\nhigh-performance SNNs without additional training. The conversion pipeline\nincludes a local-learning-based threshold balancing algorithm, which enables\nefficient calculation of the optimal thresholds and fine-grained adjustment of\nthreshold value by channel-wise scaling. We demonstrate the scalability of our\nframework across three typical computer vision tasks: image classification,\nsemantic segmentation, and object detection. This showcases its applicability\nto both classification and regression tasks. Moreover, we have evaluated the\nenergy consumption of the converted SNNs, demonstrating their superior\nlow-power advantage compared to conventional ANNs. Our training-free algorithm\noutperforms existing methods, highlighting its practical applicability and\nefficiency. This approach simplifies the deployment of SNNs by leveraging\nopen-source pre-trained ANN models and neuromorphic hardware, enabling fast,\nlow-power inference with negligible performance reduction.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03368","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Spiking Neural Networks (SNNs) have emerged as a promising substitute for
Artificial Neural Networks (ANNs) due to their advantages of fast inference and
low power consumption. However, the lack of efficient training algorithms has
hindered their widespread adoption. Existing supervised learning algorithms for
SNNs require significantly more memory and time than their ANN counterparts.
Even commonly used ANN-SNN conversion methods necessitate re-training of ANNs
to enhance conversion efficiency, incurring additional computational costs. To
address these challenges, we propose a novel training-free ANN-SNN conversion
pipeline. Our approach directly converts pre-trained ANN models into
high-performance SNNs without additional training. The conversion pipeline
includes a local-learning-based threshold balancing algorithm, which enables
efficient calculation of optimal thresholds and fine-grained adjustment of
threshold values via channel-wise scaling. We demonstrate the scalability of our
framework across three typical computer vision tasks: image classification,
semantic segmentation, and object detection, showcasing its applicability
to both classification and regression tasks. Moreover, we evaluate the
energy consumption of the converted SNNs, demonstrating their
low-power advantage over conventional ANNs. Our training-free algorithm
outperforms existing methods, highlighting its practical applicability and
efficiency. This approach simplifies the deployment of SNNs by leveraging
open-source pre-trained ANN models and neuromorphic hardware, enabling fast,
low-power inference with negligible performance reduction.
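The local-learning-based threshold balancing step is not specified in the abstract, so the following is a minimal sketch of the general idea behind channel-wise threshold balancing in ANN-SNN conversion: calibrate a per-channel firing threshold for each integrate-and-fire (IF) neuron from the activation statistics of the pretrained ANN on a small amount of data. All names here (IFNeuron, calibrate_thresholds, the percentile parameter) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of channel-wise threshold balancing for ANN-SNN
# conversion (PyTorch). This illustrates the general idea (calibrating
# per-channel IF-neuron thresholds from the pretrained ANN's activation
# statistics), not the paper's exact algorithm. All names are hypothetical.
import torch
import torch.nn as nn


class IFNeuron(nn.Module):
    """Integrate-and-fire neuron with one firing threshold per channel."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Per-channel thresholds, to be overwritten by calibration.
        self.register_buffer("v_th", torch.ones(num_channels))
        self.v_mem = None  # membrane potential, created lazily

    def reset(self):
        self.v_mem = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v_mem is None:
            self.v_mem = torch.zeros_like(x)
        self.v_mem = self.v_mem + x
        th = self.v_th.view(1, -1, *([1] * (x.dim() - 2)))
        spikes = (self.v_mem >= th).float()
        self.v_mem = self.v_mem - spikes * th  # soft reset: subtract threshold
        return spikes * th  # spike amplitude carries the threshold scale


@torch.no_grad()
def calibrate_thresholds(ann: nn.Module, neurons: dict, loader, device,
                         percentile: float = 0.999):
    """Set each IF neuron's per-channel threshold to a high percentile of
    the matching ReLU's activations on a few calibration batches.

    `neurons` maps ANN ReLU module names to the IFNeuron replacing them.
    """
    ann.eval().to(device)
    stats = {name: [] for name in neurons}
    hooks = []

    def make_hook(name):
        def hook(_module, _inputs, out):
            # Collapse batch and spatial dims, keep the channel dim.
            stats[name].append(out.transpose(0, 1).reshape(out.shape[1], -1))
        return hook

    for name, module in ann.named_modules():
        if name in neurons:
            hooks.append(module.register_forward_hook(make_hook(name)))

    for images, _ in loader:  # assumes (input, label) batches
        ann(images.to(device))
    for h in hooks:
        h.remove()

    for name, neuron in neurons.items():
        acts = torch.cat(stats[name], dim=1)
        neuron.v_th = torch.quantile(acts, percentile, dim=1).clamp(min=1e-5)
```

Soft reset (subtracting the threshold rather than zeroing the membrane) is a common choice in conversion pipelines because the residual potential carries over between timesteps, reducing the rate-approximation error at short simulation lengths.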
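The abstract's energy claim follows the usual accounting in the SNN literature: an ANN performs a multiply-accumulate (MAC) on every connection on every forward pass, while a converted SNN performs cheaper accumulates only when spikes arrive, so its cost scales with the number of timesteps and the firing rate. Below is a back-of-envelope comparison using commonly cited 45 nm CMOS per-operation estimates, which may differ from the figures used in the paper.

```python
# Back-of-envelope energy comparison between an ANN layer and its SNN
# counterpart. Per-operation energies are the widely cited 45 nm CMOS
# estimates (Horowitz, ISSCC 2014); the accounting scheme here is an
# assumption, not necessarily the one used in the paper.
E_MAC_PJ = 4.6  # ANN: one multiply-accumulate per connection per pass
E_AC_PJ = 0.9   # SNN: one accumulate, only when a spike arrives


def ann_energy_pj(num_macs: float) -> float:
    """Every connection performs a MAC on every forward pass."""
    return num_macs * E_MAC_PJ


def snn_energy_pj(num_macs: float, timesteps: int, rate: float) -> float:
    """Accumulates fire only on spikes: ops = connections x T x rate."""
    return num_macs * timesteps * rate * E_AC_PJ


# Example: a layer with 1e8 connections, T = 32 timesteps, and an average
# firing rate of 5% spikes per neuron per timestep.
ann_e = ann_energy_pj(1e8)
snn_e = snn_energy_pj(1e8, timesteps=32, rate=0.05)
print(f"ANN: {ann_e / 1e6:.0f} uJ, SNN: {snn_e / 1e6:.0f} uJ, "
      f"{ann_e / snn_e:.1f}x reduction")
```

With these constants the break-even point is timesteps x rate = E_MAC / E_AC, roughly 5.1, so a converted SNN stays cheaper as long as its total spike activity per neuron remains below about five spikes; this is why conversion methods aim for few timesteps and sparse firing.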