Dynamic Fidelity Selection for Hyperparameter Optimization

Shintaro Takenaga, Yoshihiko Ozaki, Masaki Onishi
{"title":"超参数优化的动态保真度选择","authors":"Shintaro Takenaga, Yoshihiko Ozaki, Masaki Onishi","doi":"10.1145/3583133.3596320","DOIUrl":null,"url":null,"abstract":"The dramatic growth of deep learning over the past decade has increased the demand for effective hyperparameter optimization (HPO). At the moment, evolutionary algorithms such as the covariance matrix adaptation evolution strategy (CMA-ES) are recognized as one of the most promising approaches for HPO. However, it is often problematic for practitioners that HPO is a time-consuming task because of its computationally expensive objective even if evaluations were parallelized in each generation of an evolutionary algorithm. To address the problem, multi-fidelity optimization that exploits cheap-to-evaluate lower-fidelity alternatives instead of the true maximum-fidelity objective can be utilized for faster optimization. In this paper, we introduce a new fidelity-selecting strategy designed to solve HPO problems with an evolutionary algorithm. Then, we demonstrate that the CMA-ES with the proposed strategy accelerates the search by about 8.5%--15% compared with the vanilla CMA-ES while keeping the quality of the solutions obtained.","PeriodicalId":422029,"journal":{"name":"Proceedings of the Companion Conference on Genetic and Evolutionary Computation","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dynamic Fidelity Selection for Hyperparameter Optimization\",\"authors\":\"Shintaro Takenaga, Yoshihiko Ozaki, Masaki Onishi\",\"doi\":\"10.1145/3583133.3596320\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The dramatic growth of deep learning over the past decade has increased the demand for effective hyperparameter optimization (HPO). At the moment, evolutionary algorithms such as the covariance matrix adaptation evolution strategy (CMA-ES) are recognized as one of the most promising approaches for HPO. However, it is often problematic for practitioners that HPO is a time-consuming task because of its computationally expensive objective even if evaluations were parallelized in each generation of an evolutionary algorithm. To address the problem, multi-fidelity optimization that exploits cheap-to-evaluate lower-fidelity alternatives instead of the true maximum-fidelity objective can be utilized for faster optimization. In this paper, we introduce a new fidelity-selecting strategy designed to solve HPO problems with an evolutionary algorithm. 
Then, we demonstrate that the CMA-ES with the proposed strategy accelerates the search by about 8.5%--15% compared with the vanilla CMA-ES while keeping the quality of the solutions obtained.\",\"PeriodicalId\":422029,\"journal\":{\"name\":\"Proceedings of the Companion Conference on Genetic and Evolutionary Computation\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Companion Conference on Genetic and Evolutionary Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3583133.3596320\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Companion Conference on Genetic and Evolutionary Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3583133.3596320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The dramatic growth of deep learning over the past decade has increased the demand for effective hyperparameter optimization (HPO). Evolutionary algorithms such as the covariance matrix adaptation evolution strategy (CMA-ES) are currently recognized as among the most promising approaches for HPO. However, practitioners often find HPO time-consuming because the objective is computationally expensive to evaluate, even when evaluations are parallelized within each generation of an evolutionary algorithm. To address this, multi-fidelity optimization, which exploits cheap-to-evaluate lower-fidelity alternatives instead of the true maximum-fidelity objective, can be used for faster optimization. In this paper, we introduce a new fidelity-selection strategy designed to solve HPO problems with an evolutionary algorithm. We then demonstrate that the CMA-ES with the proposed strategy accelerates the search by about 8.5%–15% compared with the vanilla CMA-ES while preserving the quality of the solutions obtained.
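
The abstract describes multi-fidelity optimization in which cheap, lower-fidelity evaluations (for example, training for fewer epochs) stand in for the expensive full-fidelity objective during a CMA-ES search. The sketch below is a minimal illustration of that general idea, assuming the `cma` Python package and a toy objective; the `fidelity_for_generation` schedule is a hypothetical linear ramp used only to make the loop runnable, not the dynamic selection strategy proposed in the paper.

```python
# Minimal multi-fidelity CMA-ES sketch (illustrative only).
# Assumes the `cma` package (pip install cma); the objective and the
# fidelity schedule below are toy placeholders, not the paper's method.
import cma
import numpy as np

MAX_EPOCHS = 100  # full-fidelity training budget per evaluation


def objective(x, epochs):
    """Toy stand-in for a validation loss measured after `epochs` of training.

    A real HPO objective would train a model with hyperparameters `x`
    for `epochs` epochs and return its validation error; here, lower
    fidelity simply means a noisier estimate of the true objective.
    """
    noise = np.random.normal(scale=1.0 / np.sqrt(epochs))
    return float(np.sum(np.asarray(x) ** 2)) + noise


def fidelity_for_generation(gen, max_gen=30):
    """Hypothetical schedule: start cheap, end at full fidelity.

    The paper selects the fidelity dynamically; this linear ramp is
    only a placeholder so the loop below runs end to end.
    """
    frac = min(1.0, (gen + 1) / max_gen)
    return max(1, int(frac * MAX_EPOCHS))


es = cma.CMAEvolutionStrategy(x0=[0.5] * 5, sigma0=0.3)
generation = 0
while not es.stop() and generation < 30:
    epochs = fidelity_for_generation(generation)
    candidates = es.ask()                                 # sample one generation
    losses = [objective(x, epochs) for x in candidates]   # cheap evaluations early on
    es.tell(candidates, losses)                           # update the search distribution
    generation += 1

print("best solution found:", es.result.xbest)
```

In a loop of this shape, any speedup comes from the cheaper early-generation evaluations; the design question the paper targets is when to raise the fidelity so that the cheaper evaluations do not mislead the CMA-ES search distribution.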