Scaling Up Optuna: P2P Distributed Hyperparameters Optimization

IF 1.5 | CAS Tier 4, Computer Science | JCR Q3, Computer Science, Software Engineering
Loïc Cudennec
{"title":"Scaling Up Optuna: P2P Distributed Hyperparameters Optimization","authors":"Loïc Cudennec","doi":"10.1002/cpe.70008","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>In machine learning (ML), hyperparameter optimization (HPO) is the process of choosing a tuple of values that ensures an efficient deployment and training of an AI model. In practice, HPO not only applies to ML tuning but can also be used to tune complex numerical simulations. In this context, a numerical model of a given object is created to be used in realistic simulations. This model is defined by a set of values describing properties such as the geometry of the object or other unknown parameters related to physical quantities. While HPO for ML usually requires finding a few parameters, a numerical model can involve the tuning of more than a hundred parameters. As a consequence, a large number of tuples have to be explored and evaluated before finding a relevant solution, offering new challenges in high-performance computing for efficiently driving the optimization. In this work we rely on the Optuna HPO framework, primarily designed for ML tasks and including state-of-the-art sampling and pruning algorithms. We report on its use to optimize a complex numerical model onto a 1024-core machine. We suggest 1.5M tuples and evaluate 5M simulations using different Optuna-distributed layouts to build several tradeoffs between performance and energy consumption metrics. In order to further scale up the optimization process onto resources, we introduce OptunaP2P, an extension of Optuna based on the peer-to-peer paradigm. This allows to remove any bottleneck in the management of the shared knowledge between optimization processes. With OptunaP2P, we were able to compute up to 3 times faster compared to the regular Optuna-distributed implementation and to obtain close-to-similar results in terms of quality in this reduced time-frame.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 4-5","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.70008","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

In machine learning (ML), hyperparameter optimization (HPO) is the process of choosing a tuple of values that ensures the efficient deployment and training of an AI model. In practice, HPO not only applies to ML tuning but can also be used to tune complex numerical simulations. In this context, a numerical model of a given object is created to be used in realistic simulations. This model is defined by a set of values describing properties such as the geometry of the object or other unknown parameters related to physical quantities. While HPO for ML usually requires finding a few parameters, a numerical model can involve tuning more than a hundred parameters. As a consequence, a large number of tuples have to be explored and evaluated before a relevant solution is found, raising new challenges in high-performance computing for efficiently driving the optimization. In this work we rely on the Optuna HPO framework, which is primarily designed for ML tasks and includes state-of-the-art sampling and pruning algorithms. We report on its use to optimize a complex numerical model on a 1024-core machine. We suggest 1.5M tuples and evaluate 5M simulations using different Optuna-distributed layouts, building several tradeoffs between performance and energy-consumption metrics. To further scale the optimization process onto more resources, we introduce OptunaP2P, an extension of Optuna based on the peer-to-peer paradigm. This removes any bottleneck in the management of the knowledge shared between optimization processes. With OptunaP2P, we were able to compute up to 3 times faster than the regular Optuna-distributed implementation and obtained results of near-identical quality within this reduced time frame.
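For context, the Optuna-distributed layouts mentioned in the abstract build on Optuna's standard pattern of running many worker processes that share one study through a common storage backend. Below is a minimal sketch of that pattern; the study name, storage URL, parameter names, and toy objective are placeholders, and in the paper the objective would instead launch a complex numerical simulation with a hundred or more parameters.

```python
# Minimal sketch of distributed HPO with Optuna's shared-storage pattern.
# Each worker process runs this same script against a common storage backend.
import optuna


def objective(trial: optuna.Trial) -> float:
    # Each trial "suggests" a tuple of parameter values to evaluate.
    x = trial.suggest_float("x", -10.0, 10.0)
    y = trial.suggest_float("y", -10.0, 10.0)
    # Toy cost; a real run would execute a numerical simulation and
    # return its error or fitness value.
    return (x - 2.0) ** 2 + (y + 1.0) ** 2


if __name__ == "__main__":
    study = optuna.create_study(
        study_name="numerical-model-hpo",  # placeholder name
        storage="sqlite:///optuna_shared.db",  # assumption: any shared RDB URL
        load_if_exists=True,  # lets every worker attach to the same study
        direction="minimize",
    )
    study.optimize(objective, n_trials=100)
```

The shared storage in this sketch is the central point of coordination, and hence the bottleneck the abstract refers to: OptunaP2P replaces it with peer-to-peer exchange of trial results between the optimization processes.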

Source journal: Concurrency and Computation-Practice & Experience
Category: Engineering & Technology; Computer Science: Theory & Methods
CiteScore: 5.00
Self-citation rate: 10.00%
Annual publications: 664
Review time: 9.6 months
Journal description: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.