Federated Hyperparameter Optimisation with Flower and Optuna

J. Parra-Ullauri, Xunzheng Zhang, A. Bravalheri, R. Nejabati, D. Simeonidou

Applied Computing Review, 27 March 2023. DOI: 10.1145/3555776.3577847
Federated learning (FL) is an emerging distributed machine learning technique in which multiple clients collaborate to learn a model under the management of a central server. An FL system depends on a set of initial conditions (i.e., hyperparameters) that affect its performance. However, choosing good hyperparameters for the central server and the clients is a challenging problem. Hyperparameter tuning in FL often requires manual or automated searches to find optimal values. A notable limitation is the high cost of evaluating server and client models, which makes the tuning process computationally expensive and time-consuming. We propose an implementation that integrates the FL framework Flower with the hyperparameter optimisation framework Optuna for automated and efficient hyperparameter optimisation (HPO) in FL. Through this combination, it is possible to tune the hyperparameters of both the clients and the server online, aiming to find the optimal values at runtime. We introduce the HPO factor, which specifies the number of rounds over which HPO takes place, and the HPO rate, which defines how frequently the hyperparameters are updated and can be used for pruning. HPO is managed by the FL server, which updates the clients' hyperparameters at the given HPO rate using state-of-the-art optimisation algorithms provided by Optuna. We tested our approach by updating multiple client models simultaneously on popular image-recognition datasets, producing promising results compared to baselines.
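The paper's code is not reproduced here, but the following minimal sketch illustrates how such a server-managed HPO loop could be wired up using Flower's strategy API and Optuna's ask/tell interface. It is an assumption-laden illustration, not the authors' implementation: the tuned parameter (a learning rate `lr`), its search range, the `"accuracy"` metric key, and the exact scheduling of trials against the HPO factor and HPO rate are all hypothetical choices made for the example.

```python
# Minimal sketch (assumed wiring, not the authors' released code):
# a Flower FedAvg strategy that asks Optuna for a new learning rate
# every `hpo_rate` rounds during the first `hpo_factor` rounds and
# broadcasts it to all clients through the fit config.

import flwr as fl
import optuna


class OptunaFedAvg(fl.server.strategy.FedAvg):
    def __init__(self, hpo_factor: int = 10, hpo_rate: int = 2, **kwargs):
        # hpo_factor: number of rounds in which HPO takes place.
        # hpo_rate: frequency (in rounds) at which hyperparameters are updated.
        super().__init__(on_fit_config_fn=self._fit_config, **kwargs)
        self.hpo_factor = hpo_factor
        self.hpo_rate = hpo_rate
        self.study = optuna.create_study(direction="maximize")
        self.trial = None
        self.lr = 0.01  # value used once the HPO phase is over

    def _fit_config(self, server_round: int) -> dict:
        # Start a new Optuna trial at the beginning of each HPO window.
        if server_round <= self.hpo_factor and (server_round - 1) % self.hpo_rate == 0:
            self.trial = self.study.ask()
            self.lr = self.trial.suggest_float("lr", 1e-4, 1e-1, log=True)
        # The returned dict is delivered to every client's fit() call.
        return {"lr": self.lr, "server_round": server_round}

    def aggregate_evaluate(self, server_round, results, failures):
        loss, metrics = super().aggregate_evaluate(server_round, results, failures)
        # Feed the aggregated accuracy back to Optuna so its sampler
        # (TPE by default) can propose better values; "accuracy" assumes
        # an evaluate_metrics_aggregation_fn that reports this key.
        if self.trial is not None and metrics and "accuracy" in metrics:
            self.study.tell(self.trial, metrics["accuracy"])
            self.trial = None
        return loss, metrics
```

Under these assumptions, the strategy would be passed to `fl.server.start_server(...)` as usual, and each client would read the `lr` key from the config dict handed to its `fit()` method, so all client models are updated with the same suggested value in a given round.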