An Efficient Neural Cell Architecture for Spiking Neural Networks

Kasem Khalil;Ashok Kumar;Magdy Bayoumi
{"title":"An Efficient Neural Cell Architecture for Spiking Neural Networks","authors":"Kasem Khalil;Ashok Kumar;Magdy Bayoumi","doi":"10.1109/OJCS.2025.3563423","DOIUrl":null,"url":null,"abstract":"Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses or spikes. They fire or trigger conditionally, and learning is sensitive to such triggers' timing and duration. The Leaky Integrate and Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in SNN was proposed and found promising. This work builds on the pulse-tuning idea by proposing an area and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracies for various SNN applications. The proposed cell is designed in mixed-signal, known to be beneficial to SNN, implemented using 45-nm technology, occupies an area of 27 <inline-formula><tex-math>$\\mu {\\rm m}^{2}$</tex-math></inline-formula>, incurs 1.86 <inline-formula><tex-math>$\\mu {\\rm W}$</tex-math></inline-formula>, and yields a high learning performance of 99.12%, 96.37%, and 78.64% in N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, scalability, energy, and area economy than the state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor network applications and embedded systems requiring real-time, low-power neuromorphic computing.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"599-612"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10972324","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10972324/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses, or spikes. They fire conditionally, and learning is sensitive to the timing and duration of those firing events. The Leaky Integrate-and-Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A standing research challenge is to design energy- and area-efficient SNN neural cells that deliver high learning accuracy and scale well. Recently, the idea of tuning the spiking pulses in an SNN was proposed and shown to be promising. This work builds on the pulse-tuning idea by proposing an area- and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracy across SNN applications. The proposed cell is designed as a mixed-signal circuit, an approach known to benefit SNNs, and is implemented in 45-nm technology; it occupies an area of 27 $\mu {\rm m}^{2}$, consumes 1.86 $\mu {\rm W}$, and achieves learning accuracies of 99.12%, 96.37%, and 78.64% on the N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, better scalability, and greater energy and area economy than state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor-network applications and embedded systems requiring real-time, low-power neuromorphic computing.
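To make the mechanism described above concrete, the following is a minimal Python sketch of leaky integrate-and-fire dynamics with a reconfigurable output pulse width. It only illustrates the general LIF and pulse-tuning ideas the abstract refers to; the class name, parameter values, and the simple rate-based adaptation rule are assumptions made for this sketch, not the mixed-signal circuit proposed in the paper.

```python
# Illustrative LIF neuron with a reconfigurable output pulse width.
# All parameters and the adaptation rule are assumptions for this sketch.

import numpy as np


class AdaptiveLIFNeuron:
    def __init__(self, tau_m=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 pulse_width=1, target_rate=0.05):
        self.tau_m = tau_m              # membrane time constant (in time steps)
        self.v_rest = v_rest            # resting potential
        self.v_thresh = v_thresh        # firing threshold
        self.v_reset = v_reset          # post-spike reset potential
        self.pulse_width = pulse_width  # output pulse duration (time steps)
        self.target_rate = target_rate  # desired mean firing rate (spikes/step)
        self.v = v_rest                 # membrane potential state
        self._pulse_left = 0            # remaining steps of the current pulse

    def step(self, i_in, dt=1.0):
        """Advance one time step; return 1 while the output pulse is high."""
        # Leaky integration of the input current toward the resting potential.
        self.v += (dt / self.tau_m) * (-(self.v - self.v_rest) + i_in)
        if self._pulse_left > 0:
            self._pulse_left -= 1
            return 1
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            self._pulse_left = self.pulse_width - 1
            return 1
        return 0

    def adapt_pulse_width(self, observed_rate):
        """Crude stand-in for pulse-width reconfiguration: widen pulses when
        the neuron fires too rarely, narrow them when it fires too often."""
        if observed_rate < self.target_rate:
            self.pulse_width = min(self.pulse_width + 1, 8)
        elif observed_rate > self.target_rate:
            self.pulse_width = max(self.pulse_width - 1, 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neuron = AdaptiveLIFNeuron()
    spikes = [neuron.step(i_in=rng.uniform(0.0, 0.2)) for _ in range(1000)]
    neuron.adapt_pulse_width(observed_rate=np.mean(spikes))
    print(f"mean output rate: {np.mean(spikes):.3f}, "
          f"new pulse width: {neuron.pulse_width}")
```

In this toy version, the pulse width is adjusted between simulation runs from the observed firing rate; in the paper's hardware cell the spike rate and duration are adapted by the circuit itself.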