{"title":"An Efficient Neural Cell Architecture for Spiking Neural Networks","authors":"Kasem Khalil;Ashok Kumar;Magdy Bayoumi","doi":"10.1109/OJCS.2025.3563423","DOIUrl":null,"url":null,"abstract":"Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses or spikes. They fire or trigger conditionally, and learning is sensitive to such triggers' timing and duration. The Leaky Integrate and Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in SNN was proposed and found promising. This work builds on the pulse-tuning idea by proposing an area and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracies for various SNN applications. The proposed cell is designed in mixed-signal, known to be beneficial to SNN, implemented using 45-nm technology, occupies an area of 27 <inline-formula><tex-math>$\\mu {\\rm m}^{2}$</tex-math></inline-formula>, incurs 1.86 <inline-formula><tex-math>$\\mu {\\rm W}$</tex-math></inline-formula>, and yields a high learning performance of 99.12%, 96.37%, and 78.64% in N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, scalability, energy, and area economy than the state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor network applications and embedded systems requiring real-time, low-power neuromorphic computing.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"599-612"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10972324","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10972324/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses, or spikes. They fire conditionally, and learning is sensitive to the timing and duration of these spikes. The Leaky Integrate-and-Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy- and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in an SNN was proposed and found promising. This work builds on the pulse-tuning idea by proposing an area- and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts the spike rate and duration to attain near-optimal accuracy across various SNN applications. The proposed cell is designed in mixed-signal circuitry, which is known to benefit SNNs; implemented in 45-nm technology, it occupies an area of 27 $\mu {\rm m}^{2}$, consumes 1.86 $\mu {\rm W}$, and achieves learning accuracies of 99.12%, 96.37%, and 78.64% on the N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy and scalability, and better energy and area economy, than state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor-network applications and embedded systems requiring real-time, low-power neuromorphic computing.
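To make the LIF-with-tunable-pulse-width idea concrete, here is a minimal software sketch of a discrete-time LIF neuron whose output pulse width can be adjusted. This is an illustration only: the paper's cell is a mixed-signal 45-nm circuit, and the class name, parameter values, and the simple width-adaptation heuristic below are assumptions for exposition, not the authors' design.

```python
# Illustrative sketch, NOT the paper's mixed-signal implementation.
# A discrete-time LIF neuron with a tunable output pulse width; the
# adaptation rule in tune_pulse_width() is an assumed toy heuristic.

class TunableLIFNeuron:
    def __init__(self, tau_m=20.0, v_thresh=1.0, v_reset=0.0,
                 pulse_width=1, dt=1.0):
        self.tau_m = tau_m              # membrane time constant (ms)
        self.v_thresh = v_thresh        # firing threshold
        self.v_reset = v_reset          # potential after a spike
        self.pulse_width = pulse_width  # output pulse width, in time steps
        self.dt = dt                    # simulation step (ms)
        self.v = v_reset                # membrane potential
        self._remaining = 0             # steps left in the current pulse

    def step(self, i_in):
        """Advance one time step; return 1 while an output pulse is active."""
        if self._remaining > 0:         # still emitting the previous pulse
            self._remaining -= 1
            return 1
        # Leaky integration: dv/dt = (-v + i_in) / tau_m
        self.v += (self.dt / self.tau_m) * (-self.v + i_in)
        if self.v >= self.v_thresh:     # threshold crossing -> fire
            self.v = self.v_reset
            self._remaining = self.pulse_width - 1
            return 1
        return 0

    def tune_pulse_width(self, target_rate, measured_rate, max_width=8):
        """Assumed heuristic: widen pulses when the neuron fires too
        slowly, narrow them when it fires too fast."""
        if measured_rate < target_rate and self.pulse_width < max_width:
            self.pulse_width += 1
        elif measured_rate > target_rate and self.pulse_width > 1:
            self.pulse_width -= 1

# Usage: drive the neuron with a constant input current for 200 steps.
neuron = TunableLIFNeuron(pulse_width=2)
spikes = [neuron.step(1.5) for _ in range(200)]
print("active output steps:", sum(spikes))
```

In this sketch, widening the pulse stretches how long each spike stays asserted, which is the reconfigurability the abstract describes; in the actual cell this tuning is done in analog hardware rather than by a software rule.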