PMSN: A Parallel Multi-compartment Spiking Neuron for Multi-scale Temporal Processing
Xinyi Chen, Jibin Wu, Chenxiang Ma, Yinsong Yan, Yujie Wu, Kay Chen Tan
arXiv:2408.14917 (2024-08-27)
Abstract
Spiking Neural Networks (SNNs) hold great potential to realize brain-inspired, energy-efficient computational systems. However, current SNNs still fall short of their biological counterparts in multi-scale temporal processing. This limitation results in poor performance on many pattern recognition tasks whose information varies across different timescales. To address this issue, we put forward a novel spiking neuron model called the Parallel Multi-compartment Spiking Neuron (PMSN). The PMSN emulates biological neurons by incorporating multiple interacting substructures and allows flexible adjustment of the substructure count to effectively represent temporal information across diverse timescales. Additionally, to address the computational burden associated with the increased complexity of the proposed model, we introduce two parallelization techniques that decouple the temporal dependencies of neuronal updates, enabling parallelized training across different time steps. Our experimental results on a wide range of pattern recognition tasks demonstrate the superiority of PMSN: it outperforms other state-of-the-art spiking neuron models in temporal processing capacity, training speed, and computation cost. Specifically, compared with the commonly used Leaky Integrate-and-Fire neuron, PMSN offers a simulation acceleration of over 10$\times$ and a 30% improvement in accuracy on the Sequential CIFAR10 dataset, while maintaining comparable computational cost.
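
The abstract does not spell out how the temporal dependencies are decoupled. A common way to parallelize such updates is to express the sub-threshold compartment dynamics as a linear recurrence and evaluate it with an associative (parallel) scan. The sketch below illustrates this idea in JAX; the coupling matrix `A`, input weights `B`, compartment count, and thresholding of the last compartment are illustrative assumptions rather than the paper's exact formulation, and spike-triggered reset is omitted.

```python
# Hedged sketch (not the paper's exact method): if the sub-threshold dynamics
# of a multi-compartment neuron can be written as a linear recurrence
#     h_t = A h_{t-1} + B x_t,
# then composing time steps is associative, so all T states can be computed
# with an associative scan instead of a sequential loop over time.
import jax
import jax.numpy as jnp


def parallel_compartment_scan(A, B, x):
    """Compute h_t = A h_{t-1} + B x_t for t = 1..T in parallel.

    A: (C, C) coupling between C compartments (assumed constant over time).
    B: (C,)   input weights per compartment.
    x: (T,)   input current at each time step.
    Returns h: (T, C) compartment states, starting from h_0 = 0.
    """
    T = x.shape[0]
    b = x[:, None] * B[None, :]               # per-step affine offsets, (T, C)
    As = jnp.broadcast_to(A, (T, *A.shape))   # per-step transition matrices, (T, C, C)

    def combine(left, right):
        # Compose two affine maps h -> A h + b (right applied after left).
        A_l, b_l = left
        A_r, b_r = right
        return A_r @ A_l, jnp.einsum('...ij,...j->...i', A_r, b_l) + b_r

    _, h = jax.lax.associative_scan(combine, (As, b))
    return h


if __name__ == "__main__":
    C, T = 3, 128                                     # illustrative compartment count and sequence length
    A = 0.9 * jnp.eye(C) + 0.05 * jnp.eye(C, k=-1)    # leaky compartments with feed-forward coupling
    B = jnp.array([1.0, 0.0, 0.0])                    # input enters the first compartment
    x = jax.random.normal(jax.random.PRNGKey(0), (T,))
    h = parallel_compartment_scan(A, B, x)
    spikes = (h[:, -1] > 1.0).astype(jnp.float32)     # threshold the last compartment (reset omitted)
    print(h.shape, int(spikes.sum()))
```

A spike-triggered reset breaks this linearity, so the actual PMSN presumably handles spiking differently; the sketch only illustrates why decoupled neuronal updates can be trained in parallel across time steps.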