Design and Implementation of a Highly Accurate Stochastic Spiking Neural Network

Chengcheng Tang, Jie Han

2021 IEEE Workshop on Signal Processing Systems (SiPS), October 2021

DOI: 10.1109/SiPS52927.2021.00050
Citations: 1
Abstract
The emergence of spiking neural networks (SNNs) provides a promising approach to the energy-efficient design of artificial neural networks (ANNs). Rate-encoded computation in SNNs uses the number of spikes in a time window to encode the intensity of a signal, in a manner similar to the information encoding in stochastic computing. Inspired by this similarity, this paper presents a hardware design of stochastic SNNs that attains a high accuracy. A design framework is elaborated for the input, hidden, and output layers. The design uses a priority encoder to convert the spikes between layers of neurons into index-based signals and uses the cumulative distribution function of the signals for spike train generation. It thereby mitigates the problem of relatively low information density and reduces the hardware resource usage of SNNs. The design is implemented on field-programmable gate arrays (FPGAs) and its performance is evaluated on the MNIST image recognition dataset. Hardware costs are evaluated for different hidden-layer sizes in the stochastic SNNs, and the recognition accuracy is measured for different lengths of stochastic sequences. The results show that this stochastic SNN framework achieves a higher accuracy than other SNN designs and an accuracy comparable to that of its ANN counterparts. Hence, the proposed SNN design can be an effective alternative for achieving high accuracy in hardware-constrained applications.
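The two encoding ideas the abstract relies on can be sketched in software. The snippet below is a minimal illustration, not the paper's hardware implementation: `rate_encode` produces a Bernoulli bitstream whose spike rate equals the signal intensity (the stochastic-computing-style rate code), and `cdf_spike_indices` is a hypothetical software analogue of generating index-based spike trains by inverse-transform sampling from a cumulative distribution function. All function names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity, length):
    # Rate coding as a stochastic bitstream: each of `length` timesteps
    # spikes independently with probability `intensity` (in [0, 1]),
    # so the spike count over the window encodes the signal value.
    return (rng.random(length) < intensity).astype(np.uint8)

def cdf_spike_indices(activity, length):
    # Hypothetical CDF-based index generator: normalize the non-negative
    # per-neuron activity values, build their cumulative distribution,
    # and draw neuron indices by inverse-transform sampling. Each draw
    # names the index of the neuron that spikes at that timestep.
    p = np.asarray(activity, dtype=float)
    p = p / p.sum()
    cdf = np.cumsum(p)
    u = rng.random(length)
    return np.searchsorted(cdf, u)

# Example: encode an intensity of 0.7 over a 64-step window, and
# generate 64 spike indices for a 3-neuron layer.
train = rate_encode(0.7, 64)
indices = cdf_spike_indices([0.1, 0.3, 0.6], 64)
```

Longer windows reduce the variance of the encoded value, which matches the paper's observation that recognition accuracy depends on the length of the stochastic sequences.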