SpikeNAS-Bench: Benchmarking NAS Algorithms for Spiking Neural Network Architecture

Gengchen Sun; Zhengkun Liu; Lin Gan; Hang Su; Ting Li; Wenfeng Zhao; Biao Sun

IEEE Transactions on Artificial Intelligence, vol. 6, no. 6, pp. 1614–1625
DOI: 10.1109/TAI.2025.3534136 | Published: 27 January 2025 | Citations: 0
https://ieeexplore.ieee.org/document/10855683/
Abstract
In recent years, neural architecture search (NAS) has made significant advances, yet its efficacy is hampered by its dependence on substantial computational resources. To mitigate this, NAS benchmarks have been developed that enumerate all candidate network architectures and their performance within a predefined search space. Nonetheless, these benchmarks predominantly focus on convolutional architectures, which are criticized for their limited interpretability and suboptimal hardware efficiency. Recognizing the untapped potential of spiking neural networks (SNNs), often hailed as the third generation of neural networks for their biological realism and computational thrift, this study introduces SpikeNAS-Bench. As a pioneering benchmark for SNNs, SpikeNAS-Bench uses a cell-based search space that takes leaky integrate-and-fire (LIF) neurons with variable thresholds as candidate operations. It encompasses 15,625 candidate architectures, each rigorously evaluated on the CIFAR10, CIFAR100, and Tiny-ImageNet datasets. This article examines the architectural characteristics of SpikeNAS-Bench under various criteria to underscore the benchmark's utility and presents insights that could guide future NAS algorithm designs. Moreover, we assess the benchmark's consistency through three distinct proxy types: zero-cost-based, early-stop-based, and predictor-based proxies. Additionally, the article benchmarks seven contemporary NAS algorithms to attest to SpikeNAS-Bench's broad applicability. We provide training logs and diagnostic data for all candidate architectures and will release all code and datasets upon acceptance, aiming to catalyze further exploration and innovation within the SNN domain.
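For illustration, the sketch below shows a minimal discrete-time leaky integrate-and-fire neuron with a configurable firing threshold, the kind of candidate operation the abstract describes for the cell-based search space. The decay factor, threshold values, and hard-reset rule are illustrative assumptions, not the exact settings used in SpikeNAS-Bench.

```python
# Minimal sketch of a discrete-time LIF neuron with a variable threshold.
# The decay constant, reset rule, and threshold choices are assumptions
# made for illustration only.
import numpy as np


class LIFNeuron:
    def __init__(self, threshold: float = 1.0, decay: float = 0.5):
        self.threshold = threshold  # firing threshold; varying it yields different candidate operations
        self.decay = decay          # membrane leak factor per time step (assumed value)
        self.v = 0.0                # membrane potential

    def step(self, x: float) -> int:
        """Integrate input x for one time step; return 1 if the neuron spikes."""
        self.v = self.decay * self.v + x  # leaky integration
        if self.v >= self.threshold:
            self.v = 0.0                  # hard reset after a spike (assumed)
            return 1
        return 0


# Example: the same input current drives neurons with different thresholds,
# showing how the threshold changes a cell operation's spiking behavior.
inputs = np.random.rand(20)
for th in (0.5, 1.0, 2.0):
    neuron = LIFNeuron(threshold=th)
    spikes = [neuron.step(float(x)) for x in inputs]
    print(f"threshold={th}: {sum(spikes)} spikes over {len(inputs)} steps")
```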