Anarchic Convex Federated Learning

Dongsheng Li, Xiaowen Gong
DOI: 10.1109/INFOCOMWKSHPS57453.2023.10225908
Published in: IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
Publication date: 2023-05-20
Citations: 0

Abstract

The rapid advances in federated learning (FL) over the past few years have inspired a great deal of research on this emerging topic. Existing work on FL often assumes that clients participate in the learning process with a particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, while these assumptions are often hard to satisfy in practice. In this paper, we propose AFLC, an Anarchic Federated Learning algorithm for Convex learning problems, which gives maximum freedom to clients. In particular, AFLC allows clients to 1) participate in arbitrary rounds; 2) participate asynchronously; 3) participate with arbitrary numbers of local iterations. The proposed AFLC algorithm enables clients to participate in FL efficiently and flexibly according to their needs, e.g., based on their heterogeneous and time-varying computation and communication capabilities. We characterize performance bounds on the learning loss of AFLC as a function of clients' local model delays and local iteration numbers. Our results show that the convergence error can be made arbitrarily small by choosing appropriate learning rates, and that the convergence rate matches that of existing benchmarks. The results also characterize the impacts of clients' various parameters on the learning loss, which provides useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.
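The participation model the abstract describes — clients joining in arbitrary rounds, returning updates computed from stale (delayed) local models, and running arbitrary numbers of local iterations — can be illustrated with a small simulation. The sketch below is not the authors' AFLC algorithm; it is a toy asynchronous-participation loop on a convex least-squares problem, with participation probability, delay bound, and local-iteration range chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Convex problem: least-squares regression, data split across clients.
d, n_clients, n_per = 5, 4, 50
w_true = rng.normal(size=d)
data = []
for _ in range(n_clients):
    X = rng.normal(size=(n_per, d))
    y = X @ w_true + 0.01 * rng.normal(size=n_per)
    data.append((X, y))

def local_grad(k, w):
    """Gradient of client k's local least-squares loss."""
    X, y = data[k]
    return X.T @ (X @ w - y) / len(y)

def anarchic_fl_sketch(rounds=300, eta=0.05, max_delay=3):
    """Toy anarchic-style loop: each round, a random subset of clients
    sends an update computed from a stale snapshot of the global model,
    using a client-chosen number of local gradient steps."""
    w = np.zeros(d)
    history = [w.copy()]          # past global models, for stale reads
    for _ in range(rounds):
        for k in range(n_clients):
            if rng.random() < 0.5:                      # arbitrary participation
                delay = rng.integers(0, min(max_delay, len(history)))
                w_stale = history[-1 - delay]           # delayed local model
                H = rng.integers(1, 4)                  # arbitrary local iterations
                w_loc = w_stale.copy()
                for _ in range(H):
                    w_loc -= eta * local_grad(k, w_loc)
                # Server applies the client's pseudo-gradient.
                w += (w_loc - w_stale) / n_clients
        history.append(w.copy())
    return w

w_hat = anarchic_fl_sketch()
print(float(np.linalg.norm(w_hat - w_true)))
```

With a small step size, the iterate approaches the least-squares solution despite the random participation, staleness, and uneven local work, loosely mirroring the paper's claim that the convergence error can be driven down by choosing appropriate learning rates.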