{"title":"SpiKernel: A Kernel Size Exploration Methodology for Improving Accuracy of the Embedded Spiking Neural Network Systems","authors":"Rachmad Vidya Wicaksana Putra;Muhammad Shafique","doi":"10.1109/LES.2024.3510197","DOIUrl":null,"url":null,"abstract":"Spiking neural networks (SNNs) can offer ultralow power/energy consumption for machine learning-based application tasks due to their sparse spike-based operations. Currently, most of the SNN architectures need a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that can achieve high accuracy with acceptable memory footprint is highly needed. Toward this, we propose SpiKernel, a novel methodology that improves the accuracy of SNNs through kernel size exploration. Its key steps include: 1) investigating the impact of different kernel sizes on the accuracy; 2) devising new sets of kernel sizes; 3) generating SNN architectures using neural architecture search based on the selected kernel sizes; and 4) analyzing the accuracy-memory tradeoffs for SNN model selection. The experimental results show that our SpiKernel achieves higher accuracy than state-of-the-art works (i.e., 93.24% for CIFAR10, 70.84% for CIFAR100, and 62% for TinyImageNet) with less than 10 M parameters and up to <inline-formula> <tex-math>$4.8\\times $ </tex-math></inline-formula> speed-up of searching time, thereby making it suitable for embedded applications.","PeriodicalId":56143,"journal":{"name":"IEEE Embedded Systems Letters","volume":"17 3","pages":"151-155"},"PeriodicalIF":2.0000,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Embedded Systems Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10772204/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Spiking neural networks (SNNs) can offer ultralow power/energy consumption for machine learning-based application tasks due to their sparse spike-based operations. Currently, most SNN architectures require a significantly larger model size to achieve higher accuracy, which is not suitable for resource-constrained embedded applications. Therefore, developing SNNs that achieve high accuracy with an acceptable memory footprint is highly desirable. To this end, we propose SpiKernel, a novel methodology that improves the accuracy of SNNs through kernel size exploration. Its key steps include: 1) investigating the impact of different kernel sizes on accuracy; 2) devising new sets of kernel sizes; 3) generating SNN architectures using neural architecture search based on the selected kernel sizes; and 4) analyzing the accuracy-memory tradeoffs for SNN model selection. The experimental results show that SpiKernel achieves higher accuracy than state-of-the-art works (i.e., 93.24% for CIFAR10, 70.84% for CIFAR100, and 62% for TinyImageNet) with fewer than 10 M parameters and up to $4.8\times$ speed-up in search time, thereby making it suitable for embedded applications.
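The exploration loop summarized in steps 2)-4) can be illustrated with a minimal sketch. The code below is a hypothetical illustration under stated assumptions, not the authors' implementation: names such as `candidate_kernel_sets`, `param_budget`, and `evaluate_accuracy` are placeholders, and the parameter estimate only counts convolution weights and biases.

```python
# Hypothetical sketch of a kernel-size exploration loop (steps 2-4 of the
# abstract). All names are illustrative assumptions, not the authors' code.

def conv_params(in_ch, out_ch, k):
    """Parameter count of a single k x k convolution layer (weights + biases)."""
    return in_ch * out_ch * k * k + out_ch

def model_params(channel_plan, kernel_set):
    """Rough parameter estimate for a stack of conv layers whose kernel
    sizes cycle through the chosen kernel set."""
    total = 0
    for i, (cin, cout) in enumerate(zip(channel_plan[:-1], channel_plan[1:])):
        k = kernel_set[i % len(kernel_set)]
        total += conv_params(cin, cout, k)
    return total

def explore(channel_plan, candidate_kernel_sets, param_budget, evaluate_accuracy):
    """Discard kernel-size sets that exceed the memory (parameter) budget,
    then rank the remaining candidates by accuracy for model selection."""
    feasible = []
    for kernel_set in candidate_kernel_sets:
        n_params = model_params(channel_plan, kernel_set)
        if n_params <= param_budget:
            # evaluate_accuracy is assumed to train/evaluate a candidate SNN,
            # e.g., via a short training run inside a NAS loop.
            acc = evaluate_accuracy(channel_plan, kernel_set)
            feasible.append((acc, n_params, kernel_set))
    # Highest accuracy first; break ties in favor of the smaller model.
    return sorted(feasible, key=lambda t: (-t[0], t[1]))

# Example usage with a toy accuracy proxy (purely illustrative):
# results = explore(
#     channel_plan=[3, 64, 128, 256],
#     candidate_kernel_sets=[[3], [5], [3, 5], [3, 7]],
#     param_budget=10_000_000,
#     evaluate_accuracy=lambda plan, ks: 0.9,  # stand-in for real evaluation
# )
```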
Journal Description:
The IEEE Embedded Systems Letters (ESL) provides a forum for the rapid dissemination of the latest technical advances in embedded systems and related areas of embedded software. The emphasis is on models, methods, and tools that ensure secure, correct, efficient, and robust design of embedded systems and their applications.