{"title":"为 NAS 开发可配置的非等级搜索空间","authors":"","doi":"10.1016/j.neunet.2024.106700","DOIUrl":null,"url":null,"abstract":"<div><p>Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces, and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to know which of the NAS algorithm or the predefined hierarchical structure impacts performance the most. To improve flexibility, and be less reliant on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable, and allows for full network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalization and a Fully Input Convex Neural Network (FICNN) latent predictor, trained to infer the parameter count of architectures. This paper first assesses the embedding’s suitability for optimization by solving 2 computationally inexpensive problems: minimizing the number of parameters, and maximizing a zero-shot accuracy proxy. Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, providing competitive NN architectures with limited model sizes.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards a configurable and non-hierarchical search space for NAS\",\"authors\":\"\",\"doi\":\"10.1016/j.neunet.2024.106700\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally use hard-coded search spaces, and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to know which of the NAS algorithm or the predefined hierarchical structure impacts performance the most. To improve flexibility, and be less reliant on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable, and allows for full network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalization and a Fully Input Convex Neural Network (FICNN) latent predictor, trained to infer the parameter count of architectures. This paper first assesses the embedding’s suitability for optimization by solving 2 computationally inexpensive problems: minimizing the number of parameters, and maximizing a zero-shot accuracy proxy. 
Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, providing competitive NN architectures with limited model sizes.</p></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2024-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608024006245\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024006245","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Neural Architecture Search (NAS) outperforms handcrafted Neural Network (NN) design. However, current NAS methods generally rely on hard-coded search spaces and predefined hierarchical architectures. As a consequence, adapting them to a new problem can be cumbersome, and it is hard to tell whether the NAS algorithm or the predefined hierarchical structure contributes more to performance. To improve flexibility and reduce reliance on expert knowledge, this paper proposes a NAS methodology in which the search space is easily customizable and allows for full-network search. NAS is performed with Gaussian Process (GP)-based Bayesian Optimization (BO) in a continuous architecture embedding space. This embedding is built upon a Wasserstein Autoencoder, regularized by both a Maximum Mean Discrepancy (MMD) penalization and a Fully Input Convex Neural Network (FICNN) latent predictor trained to infer the parameter count of architectures. The paper first assesses the embedding’s suitability for optimization by solving two computationally inexpensive problems: minimizing the number of parameters and maximizing a zero-shot accuracy proxy. Then, two variants of complexity-aware NAS are performed on CIFAR-10 and STL-10, based on two different search spaces, yielding competitive NN architectures with limited model sizes.
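To make the MMD penalization concrete: below is a minimal sketch of a WAE-style MMD regularizer, assuming a Gaussian kernel and a standard normal prior over the latent space. The abstract does not specify these details, so `sigma`, `lambda_mmd`, and the function names are illustrative assumptions, not the authors' implementation.

```python
import torch

def gaussian_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # RBF kernel on pairwise squared Euclidean distances between rows of a and b.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd_penalty(z: torch.Tensor, z_prior: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Unbiased estimate of MMD^2 between encoded latents z and prior samples
    # z_prior; requires at least 2 samples in each batch.
    n, m = z.size(0), z_prior.size(0)
    k_zz = gaussian_kernel(z, z, sigma)
    k_pp = gaussian_kernel(z_prior, z_prior, sigma)
    k_zp = gaussian_kernel(z, z_prior, sigma)
    return ((k_zz.sum() - k_zz.diagonal().sum()) / (n * (n - 1))
            + (k_pp.sum() - k_pp.diagonal().sum()) / (m * (m - 1))
            - 2.0 * k_zp.mean())

# Hypothetical usage inside a WAE training step (lambda_mmd is a tuning assumption):
#   z = encoder(x); x_hat = decoder(z)
#   loss = recon_loss(x_hat, x) + lambda_mmd * mmd_penalty(z, torch.randn_like(z))
```

Similarly, GP-based BO over a continuous embedding can be sketched generically. This is not the authors' pipeline: `decode` and `evaluate` are hypothetical stand-ins for the WAE decoder and a cheap objective such as the parameter-count minimization used in the paper's first test problem, and the random candidate pool is one simple way to optimize the acquisition function.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, cand, y_best):
    # EI acquisition for minimization: expected improvement of each candidate
    # over the best objective value observed so far.
    mu, std = gp.predict(cand, return_std=True)
    std = np.maximum(std, 1e-9)
    z = (y_best - mu) / std
    return (y_best - mu) * norm.cdf(z) + std * norm.pdf(z)

def bo_in_latent_space(decode, evaluate, dim, n_init=10, n_iter=40, seed=0):
    # decode: latent vector -> architecture; evaluate: architecture -> scalar cost.
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_init, dim))       # initial design from the N(0, I) prior
    y = np.array([evaluate(decode(z)) for z in Z])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(Z, y)
        cand = rng.standard_normal((2048, dim))  # cheap random candidate pool
        z_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
        Z = np.vstack([Z, z_next])
        y = np.append(y, evaluate(decode(z_next)))
    best = int(np.argmin(y))
    return Z[best], y[best]
```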
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.