SpikeNAS: A Fast Memory-Aware Neural Architecture Search Framework for Spiking Neural Network-Based Embedded AI Systems

Rachmad Vidya Wicaksana Putra; Muhammad Shafique
IEEE Transactions on Artificial Intelligence, vol. 7, no. 2, pp. 947-959. DOI: 10.1109/TAI.2025.3586238. Published 2025-07-04. Available at: https://ieeexplore.ieee.org/document/11071976/

Abstract

Embedded AI systems are expected to incur low power/energy consumption when solving machine learning tasks, as these systems are usually power-constrained (e.g., object recognition in autonomous mobile agents running on portable batteries). These requirements can be fulfilled by spiking neural networks (SNNs), since their bio-inspired spike-based operations offer high accuracy and ultra-low-power/energy computation. Currently, most SNN architectures are derived from artificial neural networks, whose neuron architectures and operations differ from those of SNNs, and/or are developed without considering the memory budgets of the underlying processing hardware of embedded platforms. These limitations hinder SNNs from reaching their full potential in accuracy and efficiency. Toward this, we propose SpikeNAS, a novel fast memory-aware neural architecture search (NAS) framework for SNNs that quickly finds an appropriate SNN architecture with high accuracy under the memory budgets of the targeted embedded systems. To do this, SpikeNAS employs several key steps: analyzing the impact of network operations on accuracy, enhancing the network architecture to improve learning quality, developing a fast memory-aware search algorithm, and performing quantization. The experimental results show that SpikeNAS reduces the search time while maintaining high accuracy compared to state-of-the-art works and meeting the given memory budgets (e.g., 29×, 117×, and 3.7× faster search for CIFAR10, CIFAR100, and TinyImageNet200, respectively, on an Nvidia RTX A6000 GPU machine), thereby quickly providing an appropriate SNN architecture for memory-constrained embedded AI systems.
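The core idea of a memory-aware search under a hardware budget can be sketched as follows. This is a simplified, hypothetical illustration (toy search space, placeholder accuracy proxy, and invented function names), not the authors' actual SpikeNAS algorithm: each candidate architecture's weight memory is estimated, candidates exceeding the budget are pruned before evaluation, and the best-scoring feasible candidate is returned. The `bits` parameter hints at how quantization shrinks the memory footprint.

```python
# Hypothetical sketch of a memory-aware NAS loop (not the authors' actual
# algorithm). Candidates that violate the memory budget are pruned; the
# remaining ones are ranked by a placeholder accuracy proxy.
from itertools import product


def param_memory_bytes(channels, kernel=3, bits=8, in_ch=3):
    """Rough weight-memory estimate for a stack of conv layers.

    channels: output channels per layer; bits: weight precision
    (quantization reduces this, shrinking the footprint).
    """
    total_params = 0
    for out_ch in channels:
        total_params += in_ch * out_ch * kernel * kernel
        in_ch = out_ch
    return total_params * bits // 8


def proxy_score(channels):
    """Placeholder accuracy proxy: wider networks score higher."""
    return sum(channels)


def memory_aware_search(budget_bytes, bits=8):
    """Return the best-scoring architecture that fits the budget."""
    best, best_score = None, float("-inf")
    # Toy search space: 3 conv layers, each with 16, 32, or 64 channels.
    for channels in product([16, 32, 64], repeat=3):
        if param_memory_bytes(channels, bits=bits) > budget_bytes:
            continue  # prune: candidate exceeds the memory budget
        score = proxy_score(channels)
        if score > best_score:
            best, best_score = channels, score
    return best


arch = memory_aware_search(budget_bytes=50_000)
```

Because infeasible candidates are discarded before any training or evaluation, the search cost scales with the feasible subspace only, which is the intuition behind fast budget-constrained NAS.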