Efficiency Enhancement of Evolutionary Neural Architecture Search via Training-Free Initialization

Q. Phan, N. H. Luong
{"title":"基于无训练初始化的进化神经结构搜索效率提高","authors":"Q. Phan, N. H. Luong","doi":"10.1109/NICS54270.2021.9701573","DOIUrl":null,"url":null,"abstract":"In this paper, we adapt a method to enhance the efficiency of multi-objective evolutionary algorithms (MOEAs) when solving neural architecture search (NAS) problems by improving the initialization stage with minimal costs. Instead of sampling a small number of architectures from the search space, we sample a large number of architectures and estimate the performance of each one without invoking the computationally expensive training process but by using a zero-cost proxy. After ranking the architectures via their zero-cost proxy values and efficiency metrics, the best architectures are then chosen as the individuals of the initial population. To demonstrate the effectiveness of our method, we conduct experiments on the widely-used NAS-Bench-101 and NAS-Bench-201 benchmarks. Experimental results exhibit that the proposed method achieves not only considerable enhancements on the quality of initial populations but also on the overall performance of MOEAs in solving NAS problems. The source code of the paper is available at https://github.com/ELO-Lab/ENAS-TFI.","PeriodicalId":296963,"journal":{"name":"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Efficiency Enhancement of Evolutionary Neural Architecture Search via Training-Free Initialization\",\"authors\":\"Q. Phan, N. H. Luong\",\"doi\":\"10.1109/NICS54270.2021.9701573\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we adapt a method to enhance the efficiency of multi-objective evolutionary algorithms (MOEAs) when solving neural architecture search (NAS) problems by improving the initialization stage with minimal costs. Instead of sampling a small number of architectures from the search space, we sample a large number of architectures and estimate the performance of each one without invoking the computationally expensive training process but by using a zero-cost proxy. After ranking the architectures via their zero-cost proxy values and efficiency metrics, the best architectures are then chosen as the individuals of the initial population. To demonstrate the effectiveness of our method, we conduct experiments on the widely-used NAS-Bench-101 and NAS-Bench-201 benchmarks. Experimental results exhibit that the proposed method achieves not only considerable enhancements on the quality of initial populations but also on the overall performance of MOEAs in solving NAS problems. 
The source code of the paper is available at https://github.com/ELO-Lab/ENAS-TFI.\",\"PeriodicalId\":296963,\"journal\":{\"name\":\"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)\",\"volume\":\"18 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NICS54270.2021.9701573\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 8th NAFOSTED Conference on Information and Computer Science (NICS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NICS54270.2021.9701573","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

In this paper, we adapt a method that enhances the efficiency of multi-objective evolutionary algorithms (MOEAs) for neural architecture search (NAS) by improving the initialization stage at minimal cost. Instead of sampling a small number of architectures from the search space, we sample a large number of architectures and estimate the performance of each one with a zero-cost proxy, avoiding the computationally expensive training process. After ranking the architectures by their zero-cost proxy values and efficiency metrics, the best architectures are chosen as the individuals of the initial population. To demonstrate the effectiveness of our method, we conduct experiments on the widely used NAS-Bench-101 and NAS-Bench-201 benchmarks. The results show that the proposed method considerably improves not only the quality of the initial populations but also the overall performance of MOEAs in solving NAS problems. The source code is available at https://github.com/ELO-Lab/ENAS-TFI.
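The abstract describes a three-step initialization: sample many candidates, score each one without training, and keep the best trade-offs as the MOEA's starting population. Below is a minimal, self-contained Python sketch of that idea. Every name in it (zero_cost_proxy, model_size, training_free_init, the bit-string "architecture") is an illustrative stand-in, not the authors' actual API; in the real ENAS-TFI code the proxy would be a training-free metric computed on the untrained network, and the efficiency value would come from the NAS benchmark.

```python
# Sketch of training-free initialization for a multi-objective NAS search.
# Assumptions: two objectives (maximize a zero-cost proxy score, minimize an
# efficiency cost such as #params); selection proceeds front by front.
import random

def zero_cost_proxy(arch):
    """Placeholder for a training-free score (higher = better).
    A real proxy would score the *untrained* network."""
    return random.random()

def model_size(arch):
    """Placeholder efficiency metric (lower = better), e.g. #params or FLOPs."""
    return random.random()

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and strictly better
    in at least one. Objective tuples are (proxy score, size)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def training_free_init(sample_arch, n_sampled=500, pop_size=20):
    """Sample many architectures, score them without any training, and keep
    the best ones (taken front by front via non-dominated sorting) as the
    initial population for the MOEA."""
    candidates = [sample_arch() for _ in range(n_sampled)]
    scored = [(zero_cost_proxy(a), model_size(a), a) for a in candidates]
    population, remaining = [], scored
    while remaining and len(population) < pop_size:
        # Current Pareto front: items not dominated by any other remaining item.
        front = [s for s in remaining
                 if not any(dominates((t[0], t[1]), (s[0], s[1]))
                            for t in remaining if t is not s)]
        population.extend(a for _, _, a in front[:pop_size - len(population)])
        remaining = [s for s in remaining if s not in front]
    return population

if __name__ == "__main__":
    # Stand-in search space: an "architecture" is just a random bit string.
    pop = training_free_init(lambda: tuple(random.randint(0, 1) for _ in range(6)))
    print(len(pop), "architectures selected for the initial population")
```

Filling the population front by front mirrors NSGA-II-style environmental selection, which is a natural fit here since the paper targets multi-objective NAS; the key point is that no candidate is trained before the evolutionary search begins.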