An Online Learning Approach for Client Selection in Federated Edge Learning under Budget Constraint

Lina Su, Ruiting Zhou, Ne Wang, Guang Fang, Zong-Qiang Li
DOI: 10.1145/3545008.3545062
Published in: Proceedings of the 51st International Conference on Parallel Processing, 2022-08-29
Citations: 1

Abstract

Federated learning (FL) has emerged as a new paradigm that enables distributed mobile devices to learn a global model collaboratively. Since mobile devices (a.k.a. clients) exhibit diversity in model training quality, client selection (CS) becomes critical for efficient FL. CS faces the following challenges. First, a client's availability, training data volume, and network connection status are time-varying and cannot be easily predicted. Second, the choice of clients for training and the number of local iterations seriously affect model accuracy; thus, selecting a subset of available clients and controlling local iterations must guarantee model quality. Third, renting clients for model training incurs cost, so the long-term budget must be administered dynamically without knowledge of future inputs. To this end, we propose a federated edge learning (FedL) framework that selects appropriate clients and controls the number of training iterations in real time. FedL aims to reduce completion time while reaching the desired model convergence and satisfying the long-term budget for renting clients. FedL consists of two algorithms: i) an online learning algorithm makes CS and iteration decisions according to historical learning results; ii) an online rounding algorithm translates the fractional decisions derived by the online learning algorithm into integers that satisfy the feasibility constraints. Rigorous mathematical proof reveals that the dynamic regret and dynamic fit have sub-linear upper bounds in time for a given budget. Extensive experiments on realistic datasets suggest that FedL outperforms multiple state-of-the-art algorithms; in particular, FedL reduces completion time by at least 38% compared with the alternatives.
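The abstract pairs an online learning step (which produces fractional client-selection decisions) with an online rounding step (which converts them to feasible integers under the budget). As a minimal, hypothetical illustration of that second idea only, not the paper's actual rounding algorithm, one could round each fractional decision x[i] ∈ [0, 1] to {0, 1} by a Bernoulli draw with probability x[i], keeping a client only while its rental cost still fits the remaining budget; all names and parameters below are illustrative assumptions:

```python
import random

def round_selection(x, costs, budget, seed=0):
    """Round fractional client-selection decisions x[i] in [0, 1] to {0, 1}.

    Each client is rounded independently with probability x[i] (so each
    individual decision is preserved in expectation), but a rounded-up
    client is kept only if its rental cost fits the remaining budget.
    Hypothetical sketch -- not the rounding algorithm from the paper.
    """
    rng = random.Random(seed)
    remaining = budget
    decision = []
    for xi, ci in zip(x, costs):
        pick = 1 if (rng.random() < xi and ci <= remaining) else 0
        if pick:
            remaining -= ci  # charge the budget for the selected client
        decision.append(pick)
    return decision

# Example: four clients with fractional decisions from an online learner.
sel = round_selection([0.9, 0.2, 0.7, 0.5], costs=[3, 2, 4, 1], budget=6)
```

By construction the output is always budget-feasible; the paper's own rounding algorithm additionally comes with the stated sub-linear dynamic regret and fit guarantees, which this naive sketch does not provide.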