{"title":"An FPGA Implementation of On-Chip Trainable Multilayer SAM Spiking Neural Network","authors":"M. Motoki, Ryuji Waseda, Terumitsu Nishimuta","doi":"10.12792/ICIAE2021.027","DOIUrl":null,"url":null,"abstract":"This paper describes an implementation of a multilayer SAM spiking neural network into the PL logic part (FPGA) in a Xilinx Zynq processor. The SAM neuron model is a type of one of the most popular LIF spiking neuron model. The SAM neural network can be an on-chip trainable model because the model does not require any multiplier under our proposed Back Propagation (BP) base training algorithm. As a result, the model achieved an XOR logic element in a unit of spikes. Moreover, we achieved a multiplier-less implementation with our intended algorithm and architecture. The design allows an arbitrary number setting for the hidden and output neurons.","PeriodicalId":161085,"journal":{"name":"The Proceedings of The 9th IIAE International Conference on Industrial Application Engineering 2020","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Proceedings of The 9th IIAE International Conference on Industrial Application Engineering 2020","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12792/ICIAE2021.027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper describes an implementation of a multilayer SAM spiking neural network in the programmable logic (PL) part, i.e., the FPGA fabric, of a Xilinx Zynq device. The SAM neuron model is a variant of the leaky integrate-and-fire (LIF) model, one of the most popular spiking neuron models. The SAM neural network can be trained on chip because it requires no multipliers under our proposed back-propagation (BP)-based training algorithm. As a result, the trained model realizes the XOR function with spike-based inputs and outputs. Moreover, we achieved a multiplier-less implementation with the proposed algorithm and architecture. The design allows the numbers of hidden and output neurons to be set arbitrarily.
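
To illustrate the kind of multiplier-less arithmetic the abstract refers to, the following is a minimal software sketch of a generic LIF-style neuron update written with integer operations only: weighted inputs become conditional adds (since spikes are binary), and the leak is a power-of-two decay done with a bit shift. This is an assumption-laden illustration, not the paper's SAM dynamics or its BP-based training rule; all names, parameter values, and the shift-based leak are hypothetical.

    # Sketch of a multiplier-free LIF-style neuron update (illustrative only).
    def lif_step(v, in_spikes, weights, leak_shift=4, threshold=64):
        """One integer-only membrane update.

        v           -- current membrane potential (int)
        in_spikes   -- list of 0/1 input spikes
        weights     -- list of signed integer synaptic weights
        leak_shift  -- leak as division by 2**leak_shift (bit shift, no multiplier)
        threshold   -- firing threshold
        Returns (new_v, out_spike).
        """
        # Weighted input without multiplication: a weight is added only when
        # the corresponding input spike is 1.
        for s, w in zip(in_spikes, weights):
            if s:
                v += w
        # Leak implemented as an arithmetic right shift (power-of-two decay).
        v -= v >> leak_shift
        # Threshold-and-reset.
        if v >= threshold:
            return 0, 1
        return v, 0

    # Usage example: drive one neuron with two input lines for a few time steps.
    v, spikes = 0, []
    for t in range(10):
        v, out = lif_step(v, in_spikes=[1, 1], weights=[20, 15])
        spikes.append(out)
    print(spikes)

Because the update uses only additions, comparisons, and shifts, the same structure maps naturally onto FPGA logic without DSP multipliers, which is the general motivation behind a multiplier-less design.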