An Efficient Multi-Model Training Algorithm for Federated Learning
Congzhou Li, Chunxi Li, Yongxiang Zhao, Baoxian Zhang, Cheng Li
2021 IEEE Global Communications Conference (GLOBECOM), December 2021
DOI: 10.1109/GLOBECOM46510.2021.9685230
Citations: 2
Abstract
How to organize heterogeneous clients for effective model training is a critical issue in federated learning. Existing algorithms in this area all target single-model training and are unsuitable for parallel multi-model training because they underutilize the resources of powerful clients. In this paper, we study multi-model training in federated learning. The objective is to effectively utilize the heterogeneous resources at clients for parallel multi-model training and thereby maximize overall training efficiency while ensuring a certain fairness among the individual models. To this end, based on measurement results, we introduce a logarithmic function to characterize the relationship between a model's training accuracy and the number of clients involved in its training. We accordingly formulate multi-model training as an optimization problem that seeks an assignment maximizing overall training efficiency while ensuring logarithmic fairness among the individual models. We design a Logarithmic Fairness based Multi-model Balancing algorithm (LFMB), which iteratively replaces an already assigned model with an unassigned model at each client whenever doing so improves training efficiency, until no such improvement can be found. Numerical results demonstrate the significantly high performance of LFMB in terms of overall training efficiency and fairness.
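The local-search idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual LFMB algorithm (whose objective and constraints are defined in the full text); it only demonstrates the stated loop structure: start from some model-to-client assignment, then repeatedly try replacing an assigned model at a client with a model not assigned there, keeping any swap that improves a logarithmic-fairness objective, until no improving swap exists. The names `objective`, `lfmb_sketch`, and `capacity` are hypothetical, and the objective shown (sum of log(1 + clients per model)) is an assumed stand-in for the paper's utility function.

```python
import math

def objective(assignment, models):
    # Assumed log-fairness surrogate: sum over models of log(1 + number of
    # clients currently training that model). Diminishing returns per model
    # mirrors the logarithmic accuracy-vs-clients relation the paper measures.
    counts = {m: 0 for m in models}
    for assigned in assignment.values():
        for m in assigned:
            counts[m] += 1
    return sum(math.log(1 + c) for c in counts.values())

def lfmb_sketch(clients, models, capacity):
    """Hypothetical local-search sketch of an LFMB-style balancing loop.

    clients: list of client ids; models: list of model ids;
    capacity[c]: how many models client c can train in parallel
    (assumed <= len(models) so no client holds duplicates).
    """
    # Seed with a simple round-robin assignment up to each client's capacity.
    assignment = {c: [] for c in clients}
    slot = 0
    for c in clients:
        for _ in range(capacity[c]):
            assignment[c].append(models[slot % len(models)])
            slot += 1

    # Iterate: at each client, try swapping an assigned model for one not
    # assigned there; keep the swap only if the objective strictly improves.
    improved = True
    while improved:
        improved = False
        for c in clients:
            for j in range(len(assignment[c])):
                old = assignment[c][j]
                for new in models:
                    if new in assignment[c]:
                        continue
                    base = objective(assignment, models)
                    assignment[c][j] = new
                    if objective(assignment, models) > base + 1e-12:
                        improved = True   # keep the improving swap
                        old = new
                    else:
                        assignment[c][j] = old  # revert
    return assignment
```

Because the log objective is concave in each model's client count, improving swaps push the assignment toward a balanced spread of clients across models, which is the intuition behind the fairness guarantee described in the abstract.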