SpikeAtConv: an integrated spiking-convolutional attention architecture for energy-efficient neuromorphic vision processing.

IF 3.2 · JCR Q2, NEUROSCIENCES · CAS Zone 3 (Medicine)
Frontiers in Neuroscience · Pub Date: 2025-03-12 · eCollection Date: 2025-01-01 · DOI: 10.3389/fnins.2025.1536771
Wangdan Liao, Fei Chen, Changyue Liu, Weidong Wang, Hongyun Liu

Abstract

Introduction: Spiking Neural Networks (SNNs) offer a biologically inspired alternative to conventional artificial neural networks, with potential advantages in power efficiency due to their event-driven computation. Despite their promise, SNNs have yet to achieve competitive performance on complex visual tasks, such as image classification.
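The event-driven computation mentioned above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron: the neuron integrates input current into a membrane potential and emits a binary spike only when that potential crosses a threshold, so downstream layers need to do work only on spike events. This is a generic textbook sketch, not the neuron model specified by the paper; all parameter values are illustrative.

```python
def lif_neuron(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Run a LIF neuron over a sequence of input currents; return a binary spike train."""
    v = v_reset
    spikes = []
    for i_t in inputs:
        # Leaky integration: potential decays toward v_reset while accumulating input.
        v = v + (i_t - (v - v_reset)) / tau
        if v >= v_threshold:
            spikes.append(1)   # spike event: downstream computation is triggered
            v = v_reset        # hard reset after spiking
        else:
            spikes.append(0)   # no event: downstream layers stay idle
    return spikes

spike_train = lif_neuron([1.5, 1.5, 0.2])  # spikes once the potential crosses 1.0
```

Because the output is a sparse binary train rather than a dense activation, energy on neuromorphic hardware scales with spike count rather than with layer width, which is the efficiency argument the abstract makes.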

Methods: This study introduces a novel SNN architecture called SpikeAtConv, designed to enhance computational efficacy and task accuracy. The architecture features optimized spiking modules that facilitate the processing of spatio-temporal patterns in visual data, aiming to reconcile the computational demands of high-level vision tasks with the energy-efficient processing of SNNs.
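The abstract does not specify the internals of the SpikeAtConv module, but the general idea named in the title, convolution over spike trains combined with an attention-style reweighting, can be sketched schematically. Every function and parameter below is a hypothetical illustration of that concept, not the paper's architecture.

```python
import math

def conv1d(spikes, kernel):
    """Valid-mode 1D convolution (cross-correlation) over a binary spike train."""
    n, k = len(spikes), len(kernel)
    return [sum(spikes[i + j] * kernel[j] for j in range(k)) for i in range(n - k + 1)]

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features):
    """Reweight conv features by a softmax over their own magnitudes
    (a self-attention-like gating, purely illustrative)."""
    weights = softmax(features)
    return [w * f for w, f in zip(weights, features)]

feats = conv1d([0, 1, 1, 0, 1], [0.5, 0.25])  # local spatio-temporal features
out = attend(feats)                            # attention emphasizes strong responses
```

The point of the sketch is the division of labor: the convolution extracts local spatio-temporal patterns from the spike train, while the attention stage reweights them globally, which is one plausible way to reconcile high-level vision demands with sparse spiking inputs.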

Results: Extensive experiments show that the proposed SpikeAtConv architecture outperforms or is comparable to state-of-the-art SNNs on the evaluated datasets. Notably, the directly trained Large SpikeAtConv achieved a top-1 accuracy of 81.23% on ImageNet-1K, a state-of-the-art result in the field of SNNs.

Discussion: Our evaluations on standard image classification benchmarks indicate that the proposed architecture narrows the performance gap with traditional neural networks, providing insights into the design of more efficient and capable neuromorphic computing systems.
