{"title":"基于Spike-Time-Dependent-Plasticity Rule的小批量卷积窗表示学习训练","authors":"Yohei Shimmyo, Y. Okuyama","doi":"10.1109/MCSoC51149.2021.00052","DOIUrl":null,"url":null,"abstract":"This paper presents a mini-batch training methodology along convolutional windows for layer-wised STDP unsupervised training on convolutional layers in order to shorten the training time of spiking neural networks (SNNs). SNN is a third-generation neural network that uses an accurate neuron model compared to rate-coded models used in conventional artificial neural networks (ANNs). The mini-batches of input convolution windows are convoluted at once. Then, the input, output, and current filter generate a batch of weight updates at once. This system reduces overheads of library calls or GPU execution. The batch processing methodology leads more significant and extensive models to be trained in ANNs, while many evaluations of direct SNN training methodologies are limited to smaller models. Currently, training large-scale models is virtually impossible. We evaluated the mini-batch processing effect on training speed and feature extraction power against various mini-batch sizes. The result showed that a larger mini-batch size enables us to utilize GPUs effectively, maintaining comparable feature extraction power. This research concludes that mini-batch training along convolution windows reduces training time by STDP training rule.","PeriodicalId":166811,"journal":{"name":"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mini-Batch Training along Convolution Windows for Representation Learning Based on Spike-Time-Dependent-Plasticity Rule\",\"authors\":\"Yohei Shimmyo, Y. Okuyama\",\"doi\":\"10.1109/MCSoC51149.2021.00052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a mini-batch training methodology along convolutional windows for layer-wised STDP unsupervised training on convolutional layers in order to shorten the training time of spiking neural networks (SNNs). SNN is a third-generation neural network that uses an accurate neuron model compared to rate-coded models used in conventional artificial neural networks (ANNs). The mini-batches of input convolution windows are convoluted at once. Then, the input, output, and current filter generate a batch of weight updates at once. This system reduces overheads of library calls or GPU execution. The batch processing methodology leads more significant and extensive models to be trained in ANNs, while many evaluations of direct SNN training methodologies are limited to smaller models. Currently, training large-scale models is virtually impossible. We evaluated the mini-batch processing effect on training speed and feature extraction power against various mini-batch sizes. The result showed that a larger mini-batch size enables us to utilize GPUs effectively, maintaining comparable feature extraction power. 
This research concludes that mini-batch training along convolution windows reduces training time by STDP training rule.\",\"PeriodicalId\":166811,\"journal\":{\"name\":\"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MCSoC51149.2021.00052\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 14th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCSoC51149.2021.00052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Mini-Batch Training along Convolution Windows for Representation Learning Based on Spike-Time-Dependent-Plasticity Rule
This paper presents a mini-batch training methodology along convolution windows for layer-wise unsupervised STDP training of convolutional layers, aimed at shortening the training time of spiking neural networks (SNNs). SNNs are third-generation neural networks that use a more biologically accurate neuron model than the rate-coded neurons of conventional artificial neural networks (ANNs). In the proposed method, mini-batches of input convolution windows are convolved at once, and the inputs, outputs, and current filter then generate a batch of weight updates in a single step. This reduces the overhead of library calls and GPU execution. Batch processing has allowed much larger and more complex models to be trained in ANNs, whereas many evaluations of direct SNN training methodologies remain limited to small models; training large-scale SNNs directly is currently almost impossible. We evaluated the effect of mini-batch processing on training speed and feature extraction power for various mini-batch sizes. The results show that a larger mini-batch size lets GPUs be utilized effectively while maintaining comparable feature extraction power. This research concludes that mini-batch training along convolution windows reduces the training time of the STDP training rule.
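The abstract describes convolving mini-batches of convolution windows at once and then producing a batch of STDP weight updates from the inputs, outputs, and current filter in a single step. The sketch below illustrates that idea in PyTorch under strong simplifying assumptions (a single timestep, binary spikes, and a simplified STDP rule); the function name, hyperparameters, and update rule are illustrative only and are not the authors' implementation.

```python
# Minimal sketch (not the paper's code): batch all convolution windows of a
# mini-batch together and apply one accumulated STDP-style weight update.
import torch
import torch.nn.functional as F

def stdp_minibatch_update(spikes_in, weights, kernel_size=5,
                          a_plus=0.004, a_minus=0.003, threshold=8.0):
    """spikes_in: (B, C, H, W) binary input spikes.
    weights:   (F, C*k*k) current filter bank, one row per output filter."""
    # Extract every convolution window of every sample: (B, C*k*k, L)
    windows = F.unfold(spikes_in, kernel_size)
    B, D, L = windows.shape
    # Merge batch and window-position axes so all windows form one big batch.
    windows = windows.permute(0, 2, 1).reshape(B * L, D)      # (B*L, D)
    # One matrix multiply "convolves" every window with every filter at once.
    potentials = windows @ weights.t()                        # (B*L, F)
    spikes_out = (potentials >= threshold).float()            # post-synaptic spikes
    # Simplified STDP: where the post-synaptic neuron fired, potentiate
    # synapses whose pre-synaptic input spiked and depress the others.
    pre = windows.unsqueeze(1)                                # (B*L, 1, D)
    post = spikes_out.unsqueeze(2)                            # (B*L, F, 1)
    dw = post * (a_plus * pre - a_minus * (1.0 - pre))        # (B*L, F, D)
    # All window positions of the whole mini-batch contribute to one update,
    # so the GPU sees a few large tensor ops instead of many small ones.
    return (weights + dw.sum(dim=0)).clamp_(0.0, 1.0)
```

The point of the batching is visible in the shapes: the per-window loop is replaced by tensor operations over a (B*L, ...) axis, which is what lets larger mini-batch sizes keep the GPU busy.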