Title: Hardware-aware Neural Architecture Search with segmentation-based selection
Authors: Taehee Jeong, Elliott Delaye
DOI: 10.1109/ICICT55905.2022.00029 (https://doi.org/10.1109/ICICT55905.2022.00029)
Published in: 2022 5th International Conference on Information and Computer Technologies (ICICT), March 2022
Citations: 0
Abstract
Hardware-aware Neural Architecture Search (HW-NAS) has been drawing increasing attention because it can automatically design deep neural networks optimized for a resource-constrained device. However, it requires an enormous amount of computation, which many practitioners cannot afford. We therefore propose an efficient method for searching for promising neural architectures in HW-NAS. We significantly reduce the computing cost of the search by using both an accuracy predictor and a latency estimator, and by sharing the pre-trained weights of a super-network. The overall search procedure takes under one minute on a single CPU, a tremendous improvement over typical NAS methods, which require days or weeks on a single GPU. To search for neural architectures under multiple objectives, we propose segmentation-based selection in the search stage. Experimental results show that our approach is highly competitive with other multi-objective optimization methods. As target hardware, we experimented on a Field Programmable Gate Array (FPGA) and compared the results with modern CPUs.
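The abstract does not spell out the segmentation-based selection procedure, but one plausible reading is that the latency axis is partitioned into segments and the architecture with the highest predicted accuracy is kept in each segment, yielding a spread of accuracy/latency trade-offs. The following is a minimal sketch under that assumption; the function name, the equal-width binning, and the two-tuple candidate representation are all illustrative and not taken from the paper.

```python
import random

def segmentation_based_selection(candidates, num_segments=4):
    """Hypothetical sketch: keep the best-predicted-accuracy architecture
    in each latency segment.

    candidates: list of (predicted_accuracy, estimated_latency_ms) pairs,
    as an accuracy predictor and latency estimator might produce them.
    """
    lo = min(lat for _, lat in candidates)
    hi = max(lat for _, lat in candidates)
    # Equal-width latency segments (an assumption, not the paper's spec).
    width = (hi - lo) / num_segments or 1.0
    selected = {}
    for acc, lat in candidates:
        seg = min(int((lat - lo) / width), num_segments - 1)
        if seg not in selected or acc > selected[seg][0]:
            selected[seg] = (acc, lat)
    # Return one representative per populated segment, ordered by latency.
    return [selected[s] for s in sorted(selected)]

# Toy pool of candidate architectures scored by (accuracy, latency).
random.seed(0)
pool = [(random.uniform(0.6, 0.8), random.uniform(1.0, 9.0)) for _ in range(50)]
picks = segmentation_based_selection(pool)
print(picks)
```

Because each segment contributes at most one candidate, the result approximates a set of Pareto-style trade-off points without requiring a full non-dominated sort, which keeps the selection step cheap enough to run alongside predictor-based scoring.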