Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications

Tong Bu, Maohua Li, Zhaofei Yu
{"title":"免训练将预训练 ANN 转换为 SNN,以实现低功耗和高性能应用","authors":"Tong Bu, Maohua Li, Zhaofei Yu","doi":"arxiv-2409.03368","DOIUrl":null,"url":null,"abstract":"Spiking Neural Networks (SNNs) have emerged as a promising substitute for\nArtificial Neural Networks (ANNs) due to their advantages of fast inference and\nlow power consumption. However, the lack of efficient training algorithms has\nhindered their widespread adoption. Existing supervised learning algorithms for\nSNNs require significantly more memory and time than their ANN counterparts.\nEven commonly used ANN-SNN conversion methods necessitate re-training of ANNs\nto enhance conversion efficiency, incurring additional computational costs. To\naddress these challenges, we propose a novel training-free ANN-SNN conversion\npipeline. Our approach directly converts pre-trained ANN models into\nhigh-performance SNNs without additional training. The conversion pipeline\nincludes a local-learning-based threshold balancing algorithm, which enables\nefficient calculation of the optimal thresholds and fine-grained adjustment of\nthreshold value by channel-wise scaling. We demonstrate the scalability of our\nframework across three typical computer vision tasks: image classification,\nsemantic segmentation, and object detection. This showcases its applicability\nto both classification and regression tasks. Moreover, we have evaluated the\nenergy consumption of the converted SNNs, demonstrating their superior\nlow-power advantage compared to conventional ANNs. Our training-free algorithm\noutperforms existing methods, highlighting its practical applicability and\nefficiency. This approach simplifies the deployment of SNNs by leveraging\nopen-source pre-trained ANN models and neuromorphic hardware, enabling fast,\nlow-power inference with negligible performance reduction.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications\",\"authors\":\"Tong Bu, Maohua Li, Zhaofei Yu\",\"doi\":\"arxiv-2409.03368\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Spiking Neural Networks (SNNs) have emerged as a promising substitute for\\nArtificial Neural Networks (ANNs) due to their advantages of fast inference and\\nlow power consumption. However, the lack of efficient training algorithms has\\nhindered their widespread adoption. Existing supervised learning algorithms for\\nSNNs require significantly more memory and time than their ANN counterparts.\\nEven commonly used ANN-SNN conversion methods necessitate re-training of ANNs\\nto enhance conversion efficiency, incurring additional computational costs. To\\naddress these challenges, we propose a novel training-free ANN-SNN conversion\\npipeline. Our approach directly converts pre-trained ANN models into\\nhigh-performance SNNs without additional training. The conversion pipeline\\nincludes a local-learning-based threshold balancing algorithm, which enables\\nefficient calculation of the optimal thresholds and fine-grained adjustment of\\nthreshold value by channel-wise scaling. We demonstrate the scalability of our\\nframework across three typical computer vision tasks: image classification,\\nsemantic segmentation, and object detection. 
This showcases its applicability\\nto both classification and regression tasks. Moreover, we have evaluated the\\nenergy consumption of the converted SNNs, demonstrating their superior\\nlow-power advantage compared to conventional ANNs. Our training-free algorithm\\noutperforms existing methods, highlighting its practical applicability and\\nefficiency. This approach simplifies the deployment of SNNs by leveraging\\nopen-source pre-trained ANN models and neuromorphic hardware, enabling fast,\\nlow-power inference with negligible performance reduction.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.03368\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03368","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs) due to their advantages of fast inference and low power consumption. However, the lack of efficient training algorithms has hindered their widespread adoption. Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts. Even commonly used ANN-SNN conversion methods necessitate re-training of the ANN to enhance conversion efficiency, incurring additional computational cost. To address these challenges, we propose a novel training-free ANN-SNN conversion pipeline that directly converts pre-trained ANN models into high-performance SNNs without additional training. The pipeline includes a local-learning-based threshold balancing algorithm, which enables efficient calculation of the optimal thresholds and fine-grained adjustment of threshold values through channel-wise scaling. We demonstrate the scalability of our framework across three typical computer vision tasks: image classification, semantic segmentation, and object detection, showcasing its applicability to both classification and regression tasks. Moreover, we evaluate the energy consumption of the converted SNNs, demonstrating their clear low-power advantage over conventional ANNs. Our training-free algorithm outperforms existing methods, highlighting its practical applicability and efficiency. The approach simplifies SNN deployment by leveraging open-source pre-trained ANN models and neuromorphic hardware, enabling fast, low-power inference with negligible performance reduction.
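To make the pipeline concrete, the sketch below shows a generic training-free threshold-balancing conversion in PyTorch: per-channel firing thresholds are estimated from the activation statistics of a small calibration batch, and each ReLU is then replaced by an integrate-and-fire neuron carrying those thresholds. This is a minimal illustration of the general technique, not the authors' released code; the percentile rule, the `IFNeuron` class, and all helper names are assumptions.

```python
# Hedged sketch of training-free, channel-wise threshold balancing for a
# PyTorch CNN whose nonlinearities are nn.ReLU modules. Illustrative only.
import torch
import torch.nn as nn

@torch.no_grad()
def channelwise_thresholds(model, calib_batch, percentile=99.9):
    """Estimate a per-channel threshold for every ReLU from the activations
    of one calibration batch; no gradient updates are performed."""
    thresholds, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            # (N, C, H, W) -> (C, N*H*W); a high per-channel percentile is a
            # robust stand-in for that channel's maximum activation.
            acts = output.transpose(0, 1).reshape(output.shape[1], -1)
            thresholds[name] = torch.quantile(acts, percentile / 100.0, dim=1)
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))
    model.eval()
    model(calib_batch)  # a single forward pass collects the statistics
    for h in hooks:
        h.remove()
    return thresholds

class IFNeuron(nn.Module):
    """Integrate-and-fire neuron with per-channel thresholds; a drop-in
    replacement for nn.ReLU once the thresholds are known."""
    def __init__(self, v_th):
        super().__init__()
        self.register_buffer("v_th", v_th.view(1, -1, 1, 1))
        self.v = None  # membrane potential, created lazily

    def reset(self):
        self.v = None  # call between inputs

    def forward(self, x):
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                     # integrate the input current
        spikes = (self.v >= self.v_th).float()  # fire where threshold reached
        self.v = self.v - spikes * self.v_th    # soft reset keeps the residual
        return spikes * self.v_th               # spike amplitude = threshold
```

At inference time the same input is presented for T timesteps and the network output is accumulated (or averaged) over those steps; larger T narrows the accuracy gap to the source ANN at the cost of latency.

The low-power claim is typically quantified by operation counts: an ANN pays a multiply-accumulate (MAC) per connection per inference, while a converted SNN pays an accumulate (AC) only when a spike arrives. A back-of-envelope comparison, using the widely cited 45 nm per-operation energies and assumed workload numbers (not figures from the paper):

```python
# Illustrative energy estimate. The per-op energies are common 45 nm CMOS
# figures; the MAC count, timesteps, and firing rate are assumed values.
E_MAC, E_AC = 4.6e-12, 0.9e-12   # joules per 32-bit MAC / accumulate
macs = 1.8e9                      # ANN multiply-accumulates per inference
T, rate = 16, 0.05                # SNN timesteps and mean firing rate
ann_mj = macs * E_MAC * 1e3
snn_mj = macs * T * rate * E_AC * 1e3
print(f"ANN ~{ann_mj:.2f} mJ, SNN ~{snn_mj:.2f} mJ per inference")
```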