ANAS: Software–hardware co-design of approximate neural network accelerators via neural architecture search
Ying Wu, Zheyu Yan, Xunzhao Yin, Lenian He, Cheng Zhuo
Integration, the VLSI Journal, vol. 104, Article 102469 (published 2025-07-03)
DOI: 10.1016/j.vlsi.2025.102469
Citations: 0
Abstract
Deep Neural Networks (DNNs) are prevalent solutions for perception tasks, with energy efficiency being particularly critical for deployment on edge platforms. Various studies have proposed efficient DNN edge deployment solutions; however, an important aspect – approximate computing – has been overlooked. Current research primarily focuses on designing approximate circuits for specific DNN models, neglecting the influence of DNN architecture design. To address this gap, this paper proposes a software–hardware co-exploration framework for approximate DNN accelerator design that jointly explores approximate multipliers and neural architectures. This framework, termed Approximate Neural Architecture Search (ANAS), tackles two main challenges: (1) efficiently evaluating the impact of approximate multipliers on application performance and accelerator design for each sample, and (2) effectively navigating a large design space to identify optimal configurations. The framework employs a recurrent neural network-based reinforcement learning algorithm to identify an optimal approximate multiplier-DNN architecture pair that balances DNN accuracy and hardware cost. Experimental results demonstrate that ANAS achieves comparable accuracy while reducing energy consumption and latency by up to 40% compared to state-of-the-art NAS-based methods.
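The co-search described above can be illustrated with a minimal sketch. Everything below is assumed for illustration: the multiplier variant names, layer widths, and the mock accuracy/energy models are placeholders, and a seeded random sampler stands in for the paper's RNN-based reinforcement-learning controller. The point is only the shape of the loop: sample an (approximate multiplier, architecture) pair, evaluate it, and score it with a scalarized reward balancing accuracy against hardware cost.

```python
import random

# Toy joint design space (names are illustrative, not from the paper):
# a few approximate-multiplier variants and candidate DNN layer widths.
MULTIPLIERS = ["exact", "trunc8", "trunc6", "loba4"]
WIDTHS = [16, 32, 64]

def evaluate(mult, width):
    """Stand-in accuracy/energy models; ANAS would measure these per sample."""
    base_acc = {"exact": 0.92, "trunc8": 0.91, "trunc6": 0.89, "loba4": 0.85}[mult]
    acc = base_acc + 0.01 * WIDTHS.index(width)  # wider nets recover some accuracy
    energy = {"exact": 1.0, "trunc8": 0.7, "trunc6": 0.55, "loba4": 0.4}[mult] * width
    return acc, energy

def reward(acc, energy, lam=0.005):
    # Scalarized objective trading accuracy against hardware cost,
    # as is common in RL-based NAS controllers.
    return acc - lam * energy

def search(n_samples=200, seed=0):
    """Random-sampling stand-in for the RNN controller's policy updates."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        cand = (rng.choice(MULTIPLIERS), rng.choice(WIDTHS))
        r = reward(*evaluate(*cand))
        if best is None or r > best[0]:
            best = (r, cand)
    return best

best_reward, best_pair = search()
print(best_pair, round(best_reward, 3))
```

In the actual framework the sampler is a learned policy updated from the reward signal, and the evaluators are trained-model accuracy plus accelerator energy/latency estimates, which is what makes per-sample evaluation (challenge 1) and design-space navigation (challenge 2) the hard parts.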
About the journal
Integration's aim is to cover every aspect of the VLSI area, with an emphasis on cross-fertilization between various fields of science, and the design, verification, test and applications of integrated circuits and systems, as well as closely related topics in process and device technologies. Individual issues will feature peer-reviewed tutorials and articles as well as reviews of recent publications. The intended coverage of the journal can be assessed by examining the following (non-exclusive) list of topics:
Specification methods and languages; Analog/Digital Integrated Circuits and Systems; VLSI architectures; Algorithms, methods and tools for modeling, simulation, synthesis and verification of integrated circuits and systems of any complexity; Embedded systems; High-level synthesis for VLSI systems; Logic synthesis and finite automata; Testing, design-for-test and test generation algorithms; Physical design; Formal verification; Algorithms implemented in VLSI systems; Systems engineering; Heterogeneous systems.