{"title":"Reducing the Spike Rate in Deep Spiking Neural Networks","authors":"R. Fontanini, D. Esseni, M. Loghi","doi":"10.1145/3546790.3546798","DOIUrl":null,"url":null,"abstract":"One objective of Spiking Neural Networks is a very efficient computation in terms of energy consumption. To achieve this target, a small spike rate is of course very beneficial since the event-driven nature of such a computation. However, as the network becomes deeper, the spike rate tends to increase without any improvements in the final results. On the other hand, the introduction of a penalty on the excess of spikes can often lead the network to a configuration where many neurons are silent, resulting in a drop of the computational efficacy. In this paper, we propose a learning strategy that keeps the spike rate under control, by (i) changing the loss function to penalize the spikes generated by neurons after the first ones, and by (ii) proposing a two-phase training that avoids silent neurons during the training.","PeriodicalId":104528,"journal":{"name":"Proceedings of the International Conference on Neuromorphic Systems 2022","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the International Conference on Neuromorphic Systems 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3546790.3546798","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
One objective of Spiking Neural Networks (SNNs) is highly energy-efficient computation. To this end, a low spike rate is very beneficial, given the event-driven nature of such computation. However, as the network becomes deeper, the spike rate tends to increase without any improvement in the final results. On the other hand, introducing a penalty on excess spikes can often drive the network to a configuration where many neurons are silent, resulting in a drop in computational efficacy. In this paper, we propose a learning strategy that keeps the spike rate under control by (i) changing the loss function to penalize the spikes that each neuron generates after its first one, and (ii) adopting a two-phase training procedure that avoids silent neurons during training.
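
As an illustration of point (i), the sketch below shows one plausible form of such a penalty term in PyTorch: it charges every spike a neuron emits after its first one, so each neuron's initial spike remains free. The function name `excess_spike_penalty`, the `(T, B, N)` tensor layout, and the penalty coefficient are assumptions made for this example; the paper's exact formulation is not reproduced here.

```python
import torch

def excess_spike_penalty(spikes: torch.Tensor, weight: float = 1e-3) -> torch.Tensor:
    """Penalize every spike a neuron emits after its first one (hypothetical sketch).

    Args:
        spikes: binary spike tensor of shape (T, B, N) -- time steps, batch,
                neurons -- such as the output of a surrogate-gradient SNN layer.
        weight: penalty coefficient (hyperparameter, chosen arbitrarily here).
    """
    counts = spikes.sum(dim=0)                   # total spikes per neuron: (B, N)
    excess = torch.clamp(counts - 1.0, min=0.0)  # the first spike is not penalized
    return weight * excess.mean()
```

In training, a term of this kind would simply be added to the task loss, e.g. `loss = task_loss + excess_spike_penalty(spikes)`, leaving point (ii), the two-phase schedule that prevents neurons from falling silent, to govern when and how strongly the penalty is applied.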