{"title":"一种用于脉冲神经网络的高效神经细胞结构","authors":"Kasem Khalil;Ashok Kumar;Magdy Bayoumi","doi":"10.1109/OJCS.2025.3563423","DOIUrl":null,"url":null,"abstract":"Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses or spikes. They fire or trigger conditionally, and learning is sensitive to such triggers' timing and duration. The Leaky Integrate and Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in SNN was proposed and found promising. This work builds on the pulse-tuning idea by proposing an area and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracies for various SNN applications. The proposed cell is designed in mixed-signal, known to be beneficial to SNN, implemented using 45-nm technology, occupies an area of 27 <inline-formula><tex-math>$\\mu {\\rm m}^{2}$</tex-math></inline-formula>, incurs 1.86 <inline-formula><tex-math>$\\mu {\\rm W}$</tex-math></inline-formula>, and yields a high learning performance of 99.12%, 96.37%, and 78.64% in N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, scalability, energy, and area economy than the state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor network applications and embedded systems requiring real-time, low-power neuromorphic computing.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"599-612"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10972324","citationCount":"0","resultStr":"{\"title\":\"An Efficient Neural Cell Architecture for Spiking Neural Networks\",\"authors\":\"Kasem Khalil;Ashok Kumar;Magdy Bayoumi\",\"doi\":\"10.1109/OJCS.2025.3563423\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses or spikes. They fire or trigger conditionally, and learning is sensitive to such triggers' timing and duration. The Leaky Integrate and Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in SNN was proposed and found promising. This work builds on the pulse-tuning idea by proposing an area and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracies for various SNN applications. 
The proposed cell is designed in mixed-signal, known to be beneficial to SNN, implemented using 45-nm technology, occupies an area of 27 <inline-formula><tex-math>$\\\\mu {\\\\rm m}^{2}$</tex-math></inline-formula>, incurs 1.86 <inline-formula><tex-math>$\\\\mu {\\\\rm W}$</tex-math></inline-formula>, and yields a high learning performance of 99.12%, 96.37%, and 78.64% in N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, scalability, energy, and area economy than the state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor network applications and embedded systems requiring real-time, low-power neuromorphic computing.\",\"PeriodicalId\":13205,\"journal\":{\"name\":\"IEEE Open Journal of the Computer Society\",\"volume\":\"6 \",\"pages\":\"599-612\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10972324\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of the Computer Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10972324/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of the Computer Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10972324/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Neurons in a Spiking Neural Network (SNN) communicate using electrical pulses, or spikes. They fire conditionally, and learning is sensitive to the timing and duration of these spikes. The Leaky Integrate-and-Fire (LIF) model is the most widely used SNN neuron model. Most existing LIF-based neurons use a fixed spike frequency, which prevents them from attaining near-optimal accuracy. A research challenge is to design energy- and area-efficient SNN neural cells that provide high learning accuracy and are scalable. Recently, the idea of tuning the spiking pulses in an SNN was proposed and found promising. This work builds on that pulse-tuning idea by proposing an area- and energy-efficient, stable, and reconfigurable SNN cell that generates spikes and reconfigures its pulse width to achieve near-optimal learning. It auto-adapts spike rate and duration to attain near-optimal accuracy across various SNN applications. The proposed cell is a mixed-signal design, an approach known to be beneficial for SNNs; it is implemented in 45-nm technology, occupies an area of 27 μm², consumes 1.86 μW, and achieves high learning performance of 99.12%, 96.37%, and 78.64% on the N-MNIST, MNIST, and N-Caltech101 datasets, respectively. The proposed cell attains higher accuracy, better scalability, and greater energy and area economy than state-of-the-art SNN neurons. Its energy efficiency and compact design make it highly suitable for sensor network applications and embedded systems that require real-time, low-power neuromorphic computing.
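To make the pulse-width idea concrete, the following is a minimal software sketch of a discrete-time LIF neuron whose output pulse width is a tunable parameter. It is an illustration only, not the paper's mixed-signal circuit or its adaptation algorithm; all names and parameter values (dt, tau_m, v_th, pulse_width_steps) are assumptions chosen for the example.

```python
# Minimal discrete-time LIF neuron with a tunable output pulse width (sketch only).
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau_m=20e-3, v_rest=0.0,
                 v_th=1.0, v_reset=0.0, pulse_width_steps=1):
    """Return the output pulse train of a simple LIF neuron.

    input_current     : 1-D array of input values, one per time step
    dt                : simulation time step (s)
    tau_m             : membrane time constant (s)
    v_th, v_reset     : firing threshold and post-spike reset value
    pulse_width_steps : number of steps each output spike stays high
                        (the knob a reconfigurable cell could tune in hardware)
    """
    v = v_rest
    out = np.zeros(len(input_current))
    hold = 0                                  # remaining steps of the current pulse
    for t, i_in in enumerate(input_current):
        if hold > 0:                          # output held high for the pulse width
            out[t] = 1.0
            hold -= 1
            continue
        # Leaky integration: decay toward v_rest plus the driving input.
        v += dt / tau_m * (-(v - v_rest) + i_in)
        if v >= v_th:                         # threshold crossing -> emit a pulse
            v = v_reset
            out[t] = 1.0
            hold = pulse_width_steps - 1
    return out

# The same stimulus yields different spike trains as the pulse width changes,
# which is the kind of behaviour a reconfigurable cell can exploit.
stimulus = np.full(1000, 1.5)                 # constant drive for 1 s at dt = 1 ms
for w in (1, 5, 10):
    spikes = lif_simulate(stimulus, pulse_width_steps=w)
    rising_edges = int(np.count_nonzero(np.diff(spikes) == 1) + (spikes[0] == 1))
    print(f"pulse width {w} steps -> {int(spikes.sum())} high samples, {rising_edges} spikes")
```

Widening the pulse in this sketch lengthens each spike and, because integration is paused while the output is held high, also lowers the spike rate, illustrating why spike rate and duration must be tuned jointly rather than fixed.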