Ying-Chao Cheng, Wang-Xin Hu, Yu-Lin He, Joshua Zhexue Huang
DOI: 10.1016/j.engappai.2025.111543
Journal: Engineering Applications of Artificial Intelligence, Volume 159, Article 111543
Published: 2025-07-02 (Journal Article)
Impact Factor: 7.5; JCR Q1 (Automation & Control Systems); CAS Region 2 (Computer Science)
A comprehensive multimodal benchmark of neuromorphic training frameworks for spiking neural networks
Spiking neural networks (SNNs) represent a promising paradigm for energy-efficient, event-driven artificial intelligence, owing to their biological plausibility and unique temporal processing capabilities. Despite the rapid growth of neuromorphic training frameworks, the lack of standardized benchmarks hinders both the effective comparison of these tools and the broader advancement of SNN-based solutions for real-world applications. In this work, we address this critical gap by conducting a comprehensive, multimodal benchmark of five leading SNN frameworks—SpikingJelly, BrainCog, Sinabs, SNNGrow, and Lava. Our evaluation system integrates quantitative performance metrics – including accuracy, latency, energy consumption, and noise immunity – across diverse datasets (image, text, and neuromorphic event data), along with qualitative assessments of framework adaptability, model complexity, neuromorphic features, and community engagement. Our results indicate that SpikingJelly excels in overall performance, particularly in energy efficiency, while BrainCog demonstrates robust performance on complex tasks. Sinabs and SNNGrow offer balanced performance in latency and stability, though SNNGrow shows limitations in advanced training support and neuromorphic features, and Lava appears less adaptable to large-scale datasets. Additionally, we investigate the effects of varying time steps, training methods, and data encoding strategies on performance. This benchmark not only provides actionable guidance for selecting and optimizing SNN solutions but also lays the foundation for future research on advanced architectures and training techniques, ultimately accelerating the adoption of energy-efficient, brain-inspired computing in practical artificial intelligence engineering.
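The abstract's energy-consumption comparison rests on the event-driven nature of SNNs: computation happens only when neurons spike, so spike counts are a common proxy for energy cost. The paper's actual benchmark code is not shown here; the following is a minimal, hypothetical sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit all five benchmarked frameworks simulate, with spike count used as a crude energy proxy. All parameter names and values (`tau`, `v_threshold`, `v_reset`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_simulate(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron over T time steps.

    inputs: array of input currents, one per time step.
    Returns a binary spike train of the same length.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = v + (x - v) / tau      # leaky integration toward the input
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset            # hard reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

# Spike count over the simulation window serves as a rough proxy
# for event-driven energy use: fewer spikes, fewer computations.
train = lif_simulate(np.full(8, 1.5))
print(f"{train.sum()} spikes over {len(train)} time steps")
```

Varying the number of time steps, as the paper does, trades accuracy against latency and spike activity, which is why the abstract treats time-step count as a first-class experimental variable.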
Journal Introduction:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.