{"title":"基于FPGA的脉冲神经网络仿真加速平台","authors":"H. Hellmich, H. Klar","doi":"10.1109/MWSCAS.2004.1354175","DOIUrl":null,"url":null,"abstract":"Today's field-programmable gate array (FPGA) technology offers a large number of IO pins in order to realize a high bandwidth distributed memory architecture. Our acceleration platform, called spiking neural network emulation engine (SEE), makes use of this fact in order to tackle the main bottleneck of memory bandwidth during the simulation of large networks and is capable to treat up to 2/sup 19/ neurons and more than 800 10/sup 6/ synaptic weights. The incorporated neuron state calculation can be reconfigured in order to consider sparse or dense connection schemes. Performance evaluations have revealed that the simulation time scales with the number of adaptive weights. The SEE architecture promises an acceleration by at least factors of 4 to 8 for laterally full-connected networks compared to simulations executed by a stand-alone PC.","PeriodicalId":185817,"journal":{"name":"The 2004 47th Midwest Symposium on Circuits and Systems, 2004. MWSCAS '04.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2004-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":"{\"title\":\"An FPGA based simulation acceleration platform for spiking neural networks\",\"authors\":\"H. Hellmich, H. Klar\",\"doi\":\"10.1109/MWSCAS.2004.1354175\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Today's field-programmable gate array (FPGA) technology offers a large number of IO pins in order to realize a high bandwidth distributed memory architecture. Our acceleration platform, called spiking neural network emulation engine (SEE), makes use of this fact in order to tackle the main bottleneck of memory bandwidth during the simulation of large networks and is capable to treat up to 2/sup 19/ neurons and more than 800 10/sup 6/ synaptic weights. The incorporated neuron state calculation can be reconfigured in order to consider sparse or dense connection schemes. Performance evaluations have revealed that the simulation time scales with the number of adaptive weights. The SEE architecture promises an acceleration by at least factors of 4 to 8 for laterally full-connected networks compared to simulations executed by a stand-alone PC.\",\"PeriodicalId\":185817,\"journal\":{\"name\":\"The 2004 47th Midwest Symposium on Circuits and Systems, 2004. MWSCAS '04.\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2004-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"14\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2004 47th Midwest Symposium on Circuits and Systems, 2004. MWSCAS '04.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MWSCAS.2004.1354175\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2004 47th Midwest Symposium on Circuits and Systems, 2004. 
MWSCAS '04.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MWSCAS.2004.1354175","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An FPGA based simulation acceleration platform for spiking neural networks
Today's field-programmable gate array (FPGA) technology offers a large number of I/O pins, which makes it possible to realize a high-bandwidth distributed memory architecture. Our acceleration platform, the spiking neural network emulation engine (SEE), exploits this to tackle the main bottleneck in simulating large networks, namely memory bandwidth, and can handle up to 2^19 neurons and more than 800×10^6 synaptic weights. The integrated neuron state calculation can be reconfigured to accommodate sparse or dense connection schemes. Performance evaluations show that the simulation time scales with the number of adaptive weights. Compared to simulation on a stand-alone PC, the SEE architecture promises a speedup of at least a factor of 4 to 8 for laterally fully connected networks.
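To give a rough sense of why memory bandwidth dominates at the quoted scale, the sketch below estimates the weight-storage footprint and the lower bound on sweep time implied by the figures in the abstract (2^19 neurons, more than 800×10^6 synaptic weights). The 16-bit weight width and the 3.2 GB/s aggregate bandwidth are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope estimate for the network size quoted in the abstract.
# The 16-bit weight width and the 3.2 GB/s aggregate bandwidth are
# illustrative assumptions, not figures from the paper.

NUM_NEURONS = 2**19           # ~524,288 neurons (from the abstract)
NUM_WEIGHTS = 800 * 10**6     # >800 million synaptic weights (from the abstract)
WEIGHT_BYTES = 2              # assumed 16-bit fixed-point weights
AGG_BANDWIDTH = 3.2e9         # assumed aggregate memory bandwidth in bytes/s

storage = NUM_WEIGHTS * WEIGHT_BYTES
print(f"Weight storage:       {storage / 2**30:.2f} GiB")
print(f"Avg. synapses/neuron: {NUM_WEIGHTS / NUM_NEURONS:.0f}")

# If every adaptive weight has to be streamed from memory once per update
# sweep, the time per sweep is bounded below by storage / bandwidth --
# consistent with the observation that simulation time scales with the
# number of adaptive weights.
print(f"Min. time per sweep:  {storage / AGG_BANDWIDTH:.2f} s")
```

Under these assumptions the weights alone occupy about 1.5 GiB and a full sweep takes at least 0.5 s, which illustrates why distributing the weight memory across many FPGA I/O pins, rather than raising clock frequency, is the lever for acceleration.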