Title: The Potential of Combined Learning Strategies to Enhance Energy Efficiency of Spiking Neuromorphic Systems
Authors: Ali Shiri Sichani, Sai Kankatala
DOI: arxiv-2408.07150 (https://doi.org/arxiv-2408.07150)
Journal: arXiv - CS - Neural and Evolutionary Computing
Published: 2024-08-13 (Journal Article)
Citations: 0
Abstract
Ensuring energy-efficient design in neuromorphic computing systems necessitates a tailored architecture combined with algorithmic approaches. This manuscript focuses on enhancing brain-inspired perceptual computing machines through a novel combined learning approach for Convolutional Spiking Neural Networks (CSNNs). CSNNs present a promising alternative to traditional power-intensive and complex machine learning methods such as backpropagation, offering energy-efficient spiking-neuron processing inspired by the human brain. The proposed combined learning method integrates Pair-based Spike-Timing-Dependent Plasticity (PSTDP) and power-law-dependent Spike-Timing-Dependent Plasticity (STDP) to adjust synaptic efficacies, enabling the use of stochastic elements such as memristive devices to enhance energy efficiency and improve perceptual computing accuracy. By reducing the number of learning parameters while maintaining accuracy, such systems consume less energy and incur lower area overhead, making them more suitable for hardware implementation. The research examines neuromorphic design architectures, focusing on CSNNs to provide a general framework for energy-efficient computing hardware. Various CSNN architectures are evaluated to assess how fewer trainable parameters can maintain acceptable accuracy in perceptual computing systems, positioning them as viable candidates for neuromorphic architectures. Comparisons with previous work validate the achievements and methodology of the proposed architecture.
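The abstract combines a pair-based STDP timing kernel with a power-law weight dependence of the kind often used to model memristive synapses. The sketch below illustrates one common formulation of that combination; the parameter values, the exponential pair-based kernel, and the (1 - w)^mu / w^mu power-law form are all assumptions for illustration, not the paper's exact rule.

```python
import math

# Hypothetical parameters -- the paper's actual values are not given in the abstract.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # STDP time constants (ms)
MU = 0.9                          # power-law weight-dependence exponent

def pair_stdp(delta_t):
    """Pair-based STDP kernel; delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:  # presynaptic spike precedes postsynaptic -> potentiate
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)  # post before pre -> depress

def power_law_update(w, delta_t):
    """Scale the pairwise update by a power-law dependence on the current
    weight, a form commonly used for memristive devices (assumed here)."""
    dw = pair_stdp(delta_t)
    if dw >= 0:
        dw *= (1.0 - w) ** MU  # potentiation saturates as w approaches 1
    else:
        dw *= w ** MU          # depression saturates as w approaches 0
    return min(1.0, max(0.0, w + dw))  # keep weight in its physical range
```

A weight-dependent rule of this shape keeps synaptic efficacies bounded without hard clipping, which is one reason such rules pair naturally with device-level stochastic elements like memristors.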