{"title":"Efficiency Enhancement of Evolutionary Neural Architecture Search via Training-Free Initialization","authors":"Q. Phan, N. H. Luong","doi":"10.1109/NICS54270.2021.9701573","DOIUrl":null,"url":null,"abstract":"In this paper, we adapt a method to enhance the efficiency of multi-objective evolutionary algorithms (MOEAs) when solving neural architecture search (NAS) problems by improving the initialization stage with minimal costs. Instead of sampling a small number of architectures from the search space, we sample a large number of architectures and estimate the performance of each one without invoking the computationally expensive training process but by using a zero-cost proxy. After ranking the architectures via their zero-cost proxy values and efficiency metrics, the best architectures are then chosen as the individuals of the initial population. To demonstrate the effectiveness of our method, we conduct experiments on the widely-used NAS-Bench-101 and NAS-Bench-201 benchmarks. Experimental results exhibit that the proposed method achieves not only considerable enhancements on the quality of initial populations but also on the overall performance of MOEAs in solving NAS problems. The source code of the paper is available at https://github.com/ELO-Lab/ENAS-TFI.","PeriodicalId":296963,"journal":{"name":"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NICS54270.2021.9701573","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In this paper, we adapt a method to enhance the efficiency of multi-objective evolutionary algorithms (MOEAs) when solving neural architecture search (NAS) problems by improving the initialization stage at minimal cost. Instead of sampling a small number of architectures from the search space, we sample a large number of architectures and estimate the performance of each one with a zero-cost proxy rather than invoking the computationally expensive training process. After ranking the architectures by their zero-cost proxy values and efficiency metrics, the best architectures are chosen as the individuals of the initial population. To demonstrate the effectiveness of our method, we conduct experiments on the widely used NAS-Bench-101 and NAS-Bench-201 benchmarks. Experimental results show that the proposed method considerably improves not only the quality of the initial populations but also the overall performance of MOEAs in solving NAS problems. The source code of the paper is available at https://github.com/ELO-Lab/ENAS-TFI.
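The abstract describes the initialization pipeline at a high level: sample many architectures, score them without training, rank them by proxy value and efficiency, and keep the best as the initial population. The sketch below is a minimal, hypothetical Python illustration of that pipeline, not the authors' implementation (which is available at the linked repository). The sampling function, proxy, and efficiency metric (`toy_sample`, `toy_proxy`, `toy_flops`) are toy stand-ins, and the ranking uses a simplified non-dominated sorting over the two objectives as one plausible way to combine proxy values and efficiency metrics.

```python
import random

def toy_sample():
    """Sample a random architecture encoding (here: 6 operation choices).
    A real setup would sample from NAS-Bench-101/201 search spaces."""
    return tuple(random.randrange(5) for _ in range(6))

def toy_proxy(arch):
    """Stand-in zero-cost proxy score (higher = predicted better).
    A real setup would use a training-free proxy such as synflow."""
    return sum(arch) + random.random()

def toy_flops(arch):
    """Stand-in efficiency metric (lower = cheaper architecture)."""
    return 1.0 + 0.1 * sum(arch)

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly
    better in at least one (maximize proxy score a[0], minimize cost a[1])."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def training_free_init(num_samples=500, pop_size=20):
    """Sample many architectures, score each without training, and keep
    the best-ranked ones as the MOEA's initial population."""
    scored = [(toy_proxy(a), toy_flops(a), a)
              for a in (toy_sample() for _ in range(num_samples))]
    population = []
    # Peel off successive non-dominated fronts until the population is
    # full (a simplified non-dominated sorting; the last front is
    # truncated arbitrarily for brevity).
    while len(population) < pop_size and scored:
        front = [s for s in scored
                 if not any(dominates(t, s) for t in scored if t is not s)]
        population.extend(front[: pop_size - len(population)])
        scored = [s for s in scored if s not in front]
    return [arch for _, _, arch in population]

if __name__ == "__main__":
    pop = training_free_init()
    print(f"initial population of {len(pop)} architectures: {pop[:3]} ...")
```

The design point the abstract emphasizes is that scoring candidates with a zero-cost proxy is cheap enough to evaluate far more architectures (`num_samples`) than the population size, so the MOEA starts from an already well-spread, high-quality front instead of a purely random one.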