A comprehensive multimodal benchmark of neuromorphic training frameworks for spiking neural networks

Impact Factor 7.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Automation & Control Systems)
Ying-Chao Cheng , Wang-Xin Hu , Yu-Lin He , Joshua Zhexue Huang
{"title":"脉冲神经网络神经形态训练框架的综合多模态基准","authors":"Ying-Chao Cheng ,&nbsp;Wang-Xin Hu ,&nbsp;Yu-Lin He ,&nbsp;Joshua Zhexue Huang","doi":"10.1016/j.engappai.2025.111543","DOIUrl":null,"url":null,"abstract":"<div><div>Spiking neural networks (SNNs) represent a promising paradigm for energy-efficient, event-driven artificial intelligence, owing to their biological plausibility and unique temporal processing capabilities. Despite the rapid growth of neuromorphic training frameworks, the lack of standardized benchmarks hinders both the effective comparison of these tools and the broader advancement of SNN-based solutions for real-world applications. In this work, we address this critical gap by conducting a comprehensive, multimodal benchmark of five leading SNN frameworks—SpikingJelly, BrainCog, Sinabs, SNNGrow, and Lava. Our evaluation system integrates quantitative performance metrics – including accuracy, latency, energy consumption, and noise immunity – across diverse datasets (image, text, and neuromorphic event data), along with qualitative assessments of framework adaptability, model complexity, neuromorphic features, and community engagement. Our results indicate that SpikingJelly excels in overall performance, particularly in energy efficiency, while BrainCog demonstrates robust performance on complex tasks. Sinabs and SNNGrow offer balanced performance in latency and stability, though SNNGrow shows limitations in advanced training support and neuromorphic features, and Lava appears less adaptable to large-scale datasets. Additionally, we investigate the effects of varying time steps, training methods, and data encoding strategies on performance. This benchmark not only provides actionable guidance for selecting and optimizing SNN solutions but also lays the foundation for future research on advanced architectures and training techniques, ultimately accelerating the adoption of energy-efficient, brain-inspired computing in practical artificial intelligence engineering.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"159 ","pages":"Article 111543"},"PeriodicalIF":7.5000,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A comprehensive multimodal benchmark of neuromorphic training frameworks for spiking neural networks\",\"authors\":\"Ying-Chao Cheng ,&nbsp;Wang-Xin Hu ,&nbsp;Yu-Lin He ,&nbsp;Joshua Zhexue Huang\",\"doi\":\"10.1016/j.engappai.2025.111543\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Spiking neural networks (SNNs) represent a promising paradigm for energy-efficient, event-driven artificial intelligence, owing to their biological plausibility and unique temporal processing capabilities. Despite the rapid growth of neuromorphic training frameworks, the lack of standardized benchmarks hinders both the effective comparison of these tools and the broader advancement of SNN-based solutions for real-world applications. In this work, we address this critical gap by conducting a comprehensive, multimodal benchmark of five leading SNN frameworks—SpikingJelly, BrainCog, Sinabs, SNNGrow, and Lava. 
Our evaluation system integrates quantitative performance metrics – including accuracy, latency, energy consumption, and noise immunity – across diverse datasets (image, text, and neuromorphic event data), along with qualitative assessments of framework adaptability, model complexity, neuromorphic features, and community engagement. Our results indicate that SpikingJelly excels in overall performance, particularly in energy efficiency, while BrainCog demonstrates robust performance on complex tasks. Sinabs and SNNGrow offer balanced performance in latency and stability, though SNNGrow shows limitations in advanced training support and neuromorphic features, and Lava appears less adaptable to large-scale datasets. Additionally, we investigate the effects of varying time steps, training methods, and data encoding strategies on performance. This benchmark not only provides actionable guidance for selecting and optimizing SNN solutions but also lays the foundation for future research on advanced architectures and training techniques, ultimately accelerating the adoption of energy-efficient, brain-inspired computing in practical artificial intelligence engineering.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"159 \",\"pages\":\"Article 111543\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2025-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197625015453\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197625015453","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Spiking neural networks (SNNs) represent a promising paradigm for energy-efficient, event-driven artificial intelligence, owing to their biological plausibility and unique temporal processing capabilities. Despite the rapid growth of neuromorphic training frameworks, the lack of standardized benchmarks hinders both the effective comparison of these tools and the broader advancement of SNN-based solutions for real-world applications. In this work, we address this critical gap by conducting a comprehensive, multimodal benchmark of five leading SNN frameworks—SpikingJelly, BrainCog, Sinabs, SNNGrow, and Lava. Our evaluation system integrates quantitative performance metrics – including accuracy, latency, energy consumption, and noise immunity – across diverse datasets (image, text, and neuromorphic event data), along with qualitative assessments of framework adaptability, model complexity, neuromorphic features, and community engagement. Our results indicate that SpikingJelly excels in overall performance, particularly in energy efficiency, while BrainCog demonstrates robust performance on complex tasks. Sinabs and SNNGrow offer balanced performance in latency and stability, though SNNGrow shows limitations in advanced training support and neuromorphic features, and Lava appears less adaptable to large-scale datasets. Additionally, we investigate the effects of varying time steps, training methods, and data encoding strategies on performance. This benchmark not only provides actionable guidance for selecting and optimizing SNN solutions but also lays the foundation for future research on advanced architectures and training techniques, ultimately accelerating the adoption of energy-efficient, brain-inspired computing in practical artificial intelligence engineering.
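To make the encoding and time-step factors mentioned in the abstract concrete, the sketch below implements Poisson rate encoding and a leaky integrate-and-fire (LIF) layer unrolled over T time steps in plain PyTorch, then times a small time-step sweep. This is an illustrative example only, not the paper's benchmark harness and not the API of any of the five evaluated frameworks; names such as poisson_encode, LIFLayer, tau, and v_threshold are hypothetical choices made here.

```python
# Illustrative sketch: rate (Poisson) encoding plus a simple LIF layer
# simulated over T time steps, with a latency measurement per setting.
# Written in plain PyTorch; it does not use SpikingJelly, BrainCog,
# Sinabs, SNNGrow, or Lava APIs.
import time
import torch

def poisson_encode(x: torch.Tensor, T: int) -> torch.Tensor:
    """Rate coding: treat intensities in [0, 1] as per-step firing
    probabilities and draw independent Bernoulli spikes for T steps."""
    # x: (batch, features) -> spikes: (T, batch, features)
    return torch.bernoulli(x.clamp(0, 1).unsqueeze(0).expand(T, *x.shape))

class LIFLayer(torch.nn.Module):
    """A fully connected layer followed by simple LIF dynamics."""
    def __init__(self, in_features: int, out_features: int,
                 tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, out_features)
        self.tau, self.v_threshold = tau, v_threshold

    def forward(self, spike_seq: torch.Tensor) -> torch.Tensor:
        # spike_seq: (T, batch, in_features); returns output spike counts.
        v = torch.zeros(spike_seq.shape[1], self.fc.out_features)
        out_count = torch.zeros_like(v)
        for x_t in spike_seq:                      # iterate over time steps
            v = v + (self.fc(x_t) - v) / self.tau  # leaky integration
            spikes = (v >= self.v_threshold).float()
            out_count += spikes
            v = v * (1.0 - spikes)                 # hard reset after a spike
        return out_count

if __name__ == "__main__":
    x = torch.rand(32, 784)           # dummy batch of flattened images
    model = LIFLayer(784, 10)
    for T in (4, 8, 16):              # the kind of time-step sweep the paper studies
        t0 = time.perf_counter()
        counts = model(poisson_encode(x, T))
        latency_ms = (time.perf_counter() - t0) * 1e3
        pred = counts.argmax(dim=1)   # class with the highest spike count
        print(f"T={T:2d}  latency={latency_ms:6.2f} ms  preds={pred[:5].tolist()}")
```

In the benchmark itself, the evaluated frameworks supply their own encoders, neuron models, and training utilities; the sketch is only meant to show how the choice of T and of the encoding scheme enters measurements of accuracy and latency.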
Source journal
Engineering Applications of Artificial Intelligence
Category: Engineering Technology – Engineering: Electrical & Electronic
CiteScore: 9.60
Self-citation rate: 10.00%
Publication volume: 505
Review time: 68 days
Journal description: Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.