{"title":"Training Low-Latency Spiking Neural Network through Knowledge Distillation","authors":"Sugahara Takuya, Renyuan Zhang, Y. Nakashima","doi":"10.1109/COOLCHIPS52128.2021.9410323","DOIUrl":null,"url":null,"abstract":"Spiking neural networks (SNNs) that enable greater computational efficiency on neuromorphic hardware have attracted attention. Existing ANN-SNN conversion methods can effectively convert the weights to SNNs from a pre-trained ANN model. However, the state-of-the-art ANN-SNN conversion methods suffer from accuracy loss and high inference latency due to ineffective conversion methods. To solve this problem, we train low-latency SNN through knowledge distillation with Kullback-Leibler divergence (KL divergence). We achieve superior accuracy on CIFAR-100, 74.42% for VGG16 architecture with 5 timesteps. To our best knowledge, our work performs the fastest inference without accuracy loss compared to other state-of-the-art SNN models.","PeriodicalId":103337,"journal":{"name":"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Symposium in Low-Power and High-Speed Chips (COOL CHIPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COOLCHIPS52128.2021.9410323","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 16
Abstract
Spiking neural networks (SNNs) have attracted attention for the computational efficiency they enable on neuromorphic hardware. Existing ANN-SNN conversion methods can effectively transfer the weights of a pre-trained ANN model to an SNN. However, even state-of-the-art conversion methods suffer from accuracy loss and high inference latency. To solve this problem, we train low-latency SNNs through knowledge distillation with the Kullback-Leibler (KL) divergence. We achieve superior accuracy on CIFAR-100: 74.42% with a VGG16 architecture at 5 timesteps. To the best of our knowledge, our work performs the fastest inference without accuracy loss compared with other state-of-the-art SNN models.
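As a rough illustration of the distillation objective named in the abstract (not the authors' released code), knowledge distillation with KL divergence is typically implemented as a temperature-scaled KL term between the teacher's and student's softened logits, mixed with a hard-label cross-entropy term. The sketch below assumes PyTorch; `teacher_ann`, `student_snn`, the temperature, and the mixing weight `alpha` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of knowledge distillation with KL divergence.
# Assumes PyTorch; `teacher_ann` / `student_snn` are hypothetical
# stand-ins for the pre-trained ANN teacher and the SNN student.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Weighted sum of soft-target KL loss and hard-label cross-entropy."""
    # Soft targets: KL divergence between temperature-softened
    # teacher and student distributions. F.kl_div expects the first
    # argument as log-probabilities and the second as probabilities.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)  # standard rescaling for softened gradients
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Usage sketch: the SNN student's readout is averaged over T = 5
# timesteps (the latency reported in the abstract) to obtain logits.
# step_logits: list of T logit tensors from the SNN readout layer.
#   student_logits = torch.stack(step_logits).mean(dim=0)
#   with torch.no_grad():
#       teacher_logits = teacher_ann(images)
#   loss = distillation_loss(student_logits, teacher_logits, labels)
```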