{"title":"无政府凸联邦学习","authors":"Dongsheng Li, Xiaowen Gong","doi":"10.1109/INFOCOMWKSHPS57453.2023.10225908","DOIUrl":null,"url":null,"abstract":"The rapid advances in federated learning (FL) in the past few years have recently inspired a great deal of research on this emerging topic. Existing work on FL often assume that clients participate in the learning process with some particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, while these assumptions can be hard to hold in practice. In this paper, we propose AFLC, an Anarchic Federated Learning algorithm for Convex learning problems, which gives maximum freedom to clients. In particular, AFLC allows clients to 1) participate in arbitrary rounds; 2) participate asynchronously; 3) participate with arbitrary numbers of local iterations. The proposed AFLC algorithm enables clients to participate in FL efficiently and flexibly according to their needs, e.g., based on their heterogeneous and time-varying computation and communication capabilities. We characterize performance bounds on the learning loss of AFLC as a function of clients' local model delays and local iteration numbers. Our results show that the convergence error can be made arbitrarily small by choosing appropriate learning rates, and the convergence rate matches that of existing benchmarks. The results also characterize the impacts of clients' various parameters on the learning loss, which provide useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.","PeriodicalId":354290,"journal":{"name":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Anarchic Convex Federated Learning\",\"authors\":\"Dongsheng Li, Xiaowen Gong\",\"doi\":\"10.1109/INFOCOMWKSHPS57453.2023.10225908\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The rapid advances in federated learning (FL) in the past few years have recently inspired a great deal of research on this emerging topic. Existing work on FL often assume that clients participate in the learning process with some particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, while these assumptions can be hard to hold in practice. In this paper, we propose AFLC, an Anarchic Federated Learning algorithm for Convex learning problems, which gives maximum freedom to clients. In particular, AFLC allows clients to 1) participate in arbitrary rounds; 2) participate asynchronously; 3) participate with arbitrary numbers of local iterations. The proposed AFLC algorithm enables clients to participate in FL efficiently and flexibly according to their needs, e.g., based on their heterogeneous and time-varying computation and communication capabilities. We characterize performance bounds on the learning loss of AFLC as a function of clients' local model delays and local iteration numbers. Our results show that the convergence error can be made arbitrarily small by choosing appropriate learning rates, and the convergence rate matches that of existing benchmarks. The results also characterize the impacts of clients' various parameters on the learning loss, which provide useful insights. 
Numerical results demonstrate the efficiency of the proposed algorithm.\",\"PeriodicalId\":354290,\"journal\":{\"name\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"volume\":\"191 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10225908\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INFOCOMWKSHPS57453.2023.10225908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The rapid advances in federated learning (FL) over the past few years have inspired a great deal of research on this emerging topic. Existing work on FL often assumes that clients participate in the learning process following some particular pattern (such as balanced participation), in a synchronous manner, and/or with the same number of local iterations, yet these assumptions rarely hold in practice. In this paper, we propose AFLC, an Anarchic Federated Learning algorithm for Convex learning problems, which gives maximum freedom to clients. In particular, AFLC allows clients to 1) participate in arbitrary rounds; 2) participate asynchronously; and 3) perform arbitrary numbers of local iterations. AFLC thus enables clients to participate in FL efficiently and flexibly according to their needs, e.g., based on their heterogeneous and time-varying computation and communication capabilities. We characterize performance bounds on the learning loss of AFLC as a function of clients' local model delays and local iteration numbers. Our results show that the convergence error can be made arbitrarily small by choosing appropriate learning rates, and that the convergence rate matches that of existing benchmarks. The results also characterize the impacts of clients' various parameters on the learning loss, providing useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.
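
The abstract does not include pseudocode, but the anarchic participation model it describes can be illustrated with a minimal sketch: clients pull a (possibly stale) global model whenever they choose, run an arbitrary number of local SGD steps on a convex objective, and push updates that arrive asynchronously and out of order. Everything below (the `AnarchicServer` class, the least-squares objective, the learning rates, the pending-queue delay model) is an illustrative assumption, not the authors' AFLC implementation.

```python
import numpy as np

class AnarchicServer:
    """Toy server: applies a client update whenever it arrives, no matter
    how stale the model snapshot the client started from."""

    def __init__(self, dim, server_lr=0.5):
        self.w = np.zeros(dim)  # global model
        self.server_lr = server_lr

    def pull(self):
        # Clients may read the global model at any time.
        return self.w.copy()

    def push(self, delta):
        # Asynchronous aggregation: apply the update immediately.
        self.w += self.server_lr * delta


def client_update(w_snapshot, X, y, local_steps, local_lr=0.1):
    """An arbitrary number of local SGD steps on a convex objective
    (least squares here); returns the resulting pseudo-gradient."""
    w = w_snapshot.copy()
    rng = np.random.default_rng()
    for _ in range(local_steps):
        i = int(rng.integers(len(y)))
        grad = (X[i] @ w - y[i]) * X[i]  # per-sample gradient
        w -= local_lr * grad
    return w - w_snapshot


# Clients join arbitrary rounds, do arbitrary local work, and their
# updates arrive delayed and out of order (simulated by a pending queue).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
server = AnarchicServer(dim=5)
pending = []
for _ in range(300):
    if rng.random() < 0.7:  # a client may simply skip this round
        snapshot = server.pull()
        steps = int(rng.integers(1, 10))  # arbitrary local iterations
        pending.append(client_update(snapshot, X, y, steps))
    if pending and rng.random() < 0.5:  # delayed, out-of-order arrival
        server.push(pending.pop(int(rng.integers(len(pending)))))
```

For intuition on the kind of guarantee the abstract alludes to, convergence bounds in asynchronous convex FL typically decompose into an optimization term plus penalties for staleness and heterogeneous local work. The form below is a generic illustrative shape, not the paper's theorem; the symbols (rounds $T$, learning rate $\eta$, worst-case model delay $\tau_{\max}$, client $i$'s local iteration count $K_i$) are assumptions:

```latex
\mathbb{E}\!\left[f(\bar{w}_T)\right] - f(w^\star)
\;\le\;
\underbrace{O\!\left(\tfrac{1}{\eta T}\right)}_{\text{optimization}}
+ \underbrace{O\!\left(\eta \, \tau_{\max}\right)}_{\text{model delay}}
+ \underbrace{O\!\left(\eta \, \max_i K_i\right)}_{\text{local iterations}}
```

Under this shape, shrinking $\eta$ drives the delay and local-iteration terms toward zero while a larger $T$ controls the optimization term, which is consistent with the abstract's claim that the convergence error can be made arbitrarily small by choosing appropriate learning rates.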