Dynamic Fidelity Selection for Hyperparameter Optimization
Shintaro Takenaga, Yoshihiko Ozaki, Masaki Onishi
Proceedings of the Companion Conference on Genetic and Evolutionary Computation, July 15, 2023
DOI: 10.1145/3583133.3596320
The dramatic growth of deep learning over the past decade has increased the demand for effective hyperparameter optimization (HPO). Evolutionary algorithms such as the covariance matrix adaptation evolution strategy (CMA-ES) are currently recognized as among the most promising approaches to HPO. However, HPO remains time-consuming for practitioners because the objective is computationally expensive to evaluate, even when evaluations are parallelized within each generation of an evolutionary algorithm. To address this, multi-fidelity optimization, which substitutes cheap-to-evaluate lower-fidelity approximations for the true maximum-fidelity objective, can be used to speed up the search. In this paper, we introduce a new fidelity-selection strategy designed for solving HPO problems with an evolutionary algorithm. We then demonstrate that CMA-ES with the proposed strategy accelerates the search by about 8.5%--15% compared with the vanilla CMA-ES while preserving the quality of the solutions obtained.
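The abstract does not specify the paper's fidelity-selection rule, so the following is only a minimal sketch of the general multi-fidelity idea it builds on: an evolutionary loop that evaluates candidates at a reduced, cheaper fidelity and raises the fidelity as the search progresses. The linear schedule, the noisy sphere objective, and the simplified (mu, lambda)-style update standing in for full CMA-ES are all illustrative assumptions, not the paper's method.

```python
import random

def objective(x, fidelity):
    """Toy objective: sphere function at reduced fidelity.

    fidelity in (0, 1]; lower fidelity adds noise, standing in for e.g.
    training a network for fewer epochs. Evaluation cost is modeled as
    proportional to fidelity. Returns (noisy value, cost).
    """
    exact = sum(v * v for v in x)
    noise = random.gauss(0.0, (1.0 - fidelity) * 0.5)
    return exact + noise, fidelity

def select_fidelity(generation, max_generations):
    """Hypothetical schedule: start cheap, approach full fidelity."""
    return min(1.0, 0.2 + 0.8 * generation / max_generations)

def evolve(dim=3, pop_size=8, max_generations=20, seed=0):
    """Simplified evolution loop (not full CMA-ES covariance adaptation)."""
    random.seed(seed)
    mean = [random.uniform(-2.0, 2.0) for _ in range(dim)]
    sigma = 1.0
    total_cost = 0.0
    for g in range(max_generations):
        fid = select_fidelity(g, max_generations)
        # Sample a population around the current mean.
        pop = [[m + sigma * random.gauss(0.0, 1.0) for m in mean]
               for _ in range(pop_size)]
        scored = []
        for ind in pop:
            val, cost = objective(ind, fid)
            total_cost += cost
            scored.append((val, ind))
        # Recombine the better half into the new mean.
        scored.sort(key=lambda t: t[0])
        elite = [ind for _, ind in scored[: pop_size // 2]]
        mean = [sum(v[i] for v in elite) / len(elite) for i in range(dim)]
        sigma *= 0.9  # simple step-size decay in place of CMA-ES adaptation
    return mean, total_cost

best, cost = evolve()
# Total evaluation cost is below the pop_size * max_generations budget
# that always-full-fidelity evaluation would incur.
print(best, cost)
```

With this schedule the accumulated cost is 8 x 11.6 = 92.8 fidelity units versus 160 for always-full-fidelity evaluation; the trade-off studied in the paper is choosing fidelities so that such savings do not degrade final solution quality.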