A neuromorphic hardware architecture based on TTFS coding with temporal quantization for spiking neural networks
Yuxuan Yang, Qihu Xie, Zihao Xuan, Song Chen, Yi Kang
Integration, the VLSI Journal, Volume 103, Article 102403
DOI: 10.1016/j.vlsi.2025.102403
Published: 2025-03-17
Citations: 0
Abstract
In recent years, spiking neural networks (SNNs) have gained significant attention due to their biologically realistic and event-driven properties. Time-to-First-Spike (TTFS) coding is a coding scheme for SNNs in which each neuron fires only once throughout the inference process, reducing the number of spikes and improving energy efficiency. However, SNNs with TTFS coding suffer from low classification accuracy. This paper first introduces TQ-TTFS, a temporal quantization of the TTFS neuron model that addresses this issue. TQ-TTFS significantly alleviates the overfitting caused by early firing and improves the classification accuracy of SNNs. Based on TQ-TTFS, we design a hardware architecture with a new inference scheme called Hybrid Priority Inference (HPI), which greatly reduces the frequency of weight accesses and supports temporally parallel computation. To further decrease storage overhead, we also introduce shared storage and membrane-potential quantization. The proposed temporal-quantization neuron model and hardware architecture demonstrate excellent performance. Our simulations show that TQ-TTFS achieves classification accuracies of 98.6% on the MNIST dataset, 90.2% on the FashionMNIST dataset, and 80.54% on the CIFAR-10 dataset, surpassing the state of the art among temporally coded SNNs. Our FPGA implementation of the proposed hardware architecture has an inference time of only 4.4 μs per image on the MNIST dataset and 24 μs per image on the FashionMNIST dataset. The energy consumption for these inferences is only 4 μJ and 32 μJ, respectively.
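To make the TTFS idea in the abstract concrete, the sketch below simulates a single integrate-and-fire layer under TTFS coding: each input is encoded as a first-spike time, each output neuron fires at most once when its membrane potential crosses a threshold, and the resulting spike times are snapped onto a coarse temporal grid. This is an illustrative toy, not the paper's TQ-TTFS model or HPI architecture; the function name, the uniform binning scheme, and all parameter values (`threshold`, `t_max`, `q_levels`) are assumptions for demonstration only.

```python
import numpy as np

def ttfs_layer(spike_times, weights, threshold=1.0, t_max=16, q_levels=4):
    """Toy TTFS integrate-and-fire layer with uniform temporal quantization.

    spike_times : (n_in,) int array, first-spike time of each input
                  (t_max means the input never spikes)
    weights     : (n_in, n_out) synaptic weight matrix
    Returns the first-spike time of each output neuron, quantized onto
    a grid of q_levels bins (t_max again means "never fired").
    """
    n_in, n_out = weights.shape
    v = np.zeros(n_out)                       # membrane potentials
    out_t = np.full(n_out, t_max)             # t_max = "has not fired"
    for t in range(t_max):
        arrived = spike_times == t            # inputs spiking at this step
        v += weights[arrived].sum(axis=0)     # integrate incoming weights
        fire = (v >= threshold) & (out_t == t_max)
        out_t[fire] = t                       # each neuron fires at most once
    # temporal quantization: floor spike times onto a coarser grid
    bin_w = t_max // q_levels
    return np.minimum((out_t // bin_w) * bin_w, t_max)

# Two inputs spiking at t=0 and t=1 drive the first output neuron
# over threshold at t=1; quantization snaps that time down to bin 0.
out = ttfs_layer(np.array([0, 1]), np.array([[0.6, 0.0], [0.6, 0.0]]))
```

Note how quantizing spike times discards fine timing detail: with `q_levels=4` over 16 steps, any spike in the first four steps maps to the same code, which mirrors (in spirit) how coarser firing-time resolution can regularize early-firing behavior.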
About the journal:
Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and on the design, verification, test, and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics:
Specification methods and languages; Analog/Digital Integrated Circuits and Systems; VLSI architectures; Algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; Embedded systems; High-level synthesis for VLSI systems; Logic synthesis and finite automata; Testing, design-for-test and test generation algorithms; Physical design; Formal verification; Algorithms implemented in VLSI systems; Systems engineering; Heterogeneous systems.