Optimization of Clustering Strategy and Resource Allocation for Clustered Federated Learning

Wenchao Xia, Bo Xu, Haitao Zhao, Yongxu Zhu, Xinghua Sun, Tony Q. S. Quek

GLOBECOM 2022 - 2022 IEEE Global Communications Conference, published 2022-12-04
DOI: 10.1109/GLOBECOM48099.2022.10001540
Abstract
The federated learning (FL) framework enables user devices to collaboratively train a global model on their local datasets without leaking private data. However, the training performance of FL degrades when the data distributions of different devices are incongruent. Motivated by this issue, we consider a clustered FL (CFL) method in which the devices are divided into several clusters according to their data distributions and the clusters are trained simultaneously. A convergence analysis is conducted, which shows that the performance of the clustered models depends on the cosine similarity, the number of devices per cluster, and the device participation probability. Then, aiming to optimize the model training performance, a joint problem of resource allocation and device clustering is formulated and solved by decoupling it into two sub-problems. Specifically, a coalition formation algorithm is proposed for the device clustering sub-problem, while the sub-problem of bandwidth allocation and transmit power control is solved directly owing to its convexity. Finally, simulation experiments are conducted on the MNIST dataset to validate the performance of the proposed algorithm in terms of test accuracy.
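To make the clustering idea concrete, the sketch below groups devices by the cosine similarity of their local model updates. This is a minimal illustrative heuristic only, not the coalition formation algorithm proposed in the paper; the similarity threshold and the greedy assignment rule are assumptions introduced for illustration.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity between two flattened model-update vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def greedy_cluster(updates, threshold=0.5):
    """Greedily group device indices whose updates point in similar
    directions. Illustrative heuristic, NOT the paper's coalition
    formation algorithm; `threshold` is a hypothetical parameter."""
    clusters = []
    for i, u in enumerate(updates):
        placed = False
        for cluster in clusters:
            # Compare against the cluster's first member as a representative.
            if cosine_similarity(u, updates[cluster[0]]) >= threshold:
                cluster.append(i)
                placed = True
                break
        if not placed:
            clusters.append([i])
    return clusters

# Toy example: devices 0 and 1 have nearly aligned updates (similar data
# distributions), device 2 has an opposing update.
updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
print(greedy_cluster(updates))  # → [[0, 1], [2]]
```

In a CFL setting, each resulting group would then train its own cluster model, which motivates the paper's finding that per-cluster performance depends on the similarity of member updates and the number of participating devices.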