Federated Learning for Heterogeneous Mobile Edge Device: A Client Selection Game

Tongfei Liu, Hongya Wang, M. Ma
{"title":"Federated Learning for Heterogeneous Mobile Edge Device: A Client Selection Game","authors":"Tongfei Liu, Hongya Wang, M. Ma","doi":"10.1109/MSN57253.2022.00145","DOIUrl":null,"url":null,"abstract":"In the federated learning (FL) paradigm, edge devices use local datasets to participate in machine learning model training, and servers are responsible for aggregating and maintaining public models. FL cannot only solve the bandwidth limitation problem of centralized training, but also protect data privacy. However, it is difficult for heterogeneous edge devices to obtain optimal learning performance due to limited computing and communication resources. Specifically, in each round of the global aggregation process by the FL, clients in a ‘strong group’ have a greater chance to contribute their own local training results, while those clients in a ‘weak group’ have a low opportunity to participate, resulting in a negative impact on the final training result. In this paper, we consider a federated learning multi-client selection (FL-MCS) problem, which is an NP-hard problem. To find the optimal solution, we model the FL global aggregation process for clients participation as a potential game. In this game, each client will selfishly decide whether to participate in the FL global aggregation process based on its efforts and rewards. By the potential game, we prove that the competition among clients eventually reaches a stationary state, i.e. the Nash equilibrium point. We also design a distributed heuristic FL multi-client selection algorithm to achieve the maximum reward for the client in a finite number of iterations. Extensive numerical experiments prove the effectiveness of the algorithm.","PeriodicalId":114459,"journal":{"name":"2022 18th International Conference on Mobility, Sensing and Networking (MSN)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 18th International Conference on Mobility, Sensing and Networking (MSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MSN57253.2022.00145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In the federated learning (FL) paradigm, edge devices use local datasets to participate in machine learning model training, and servers are responsible for aggregating and maintaining public models. FL can not only solve the bandwidth limitation problem of centralized training but also protect data privacy. However, it is difficult for heterogeneous edge devices to obtain optimal learning performance due to limited computing and communication resources. Specifically, in each round of the FL global aggregation process, clients in a ‘strong group’ have a greater chance to contribute their local training results, while clients in a ‘weak group’ have little opportunity to participate, which negatively affects the final training result. In this paper, we consider a federated learning multi-client selection (FL-MCS) problem, which is NP-hard. To find the optimal solution, we model the FL global aggregation process for client participation as a potential game. In this game, each client selfishly decides whether to participate in the FL global aggregation process based on its effort and reward. Using the potential game, we prove that the competition among clients eventually reaches a stationary state, i.e., the Nash equilibrium point. We also design a distributed heuristic FL multi-client selection algorithm that achieves the maximum reward for each client within a finite number of iterations. Extensive numerical experiments demonstrate the effectiveness of the algorithm.
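The abstract describes clients repeatedly best-responding in a potential game until no client can improve its payoff by unilaterally changing its participation decision (a Nash equilibrium). The sketch below illustrates that convergence idea only; the reward model (a fixed budget shared among participants minus a per-client effort cost) and all function names are illustrative assumptions, not the paper's actual FL-MCS formulation or algorithm.

```python
# Illustrative best-response dynamics for a client-participation potential game.
# Assumed reward model: a fixed budget is split evenly among participants and
# each participant pays an individual effort cost; this is a hypothetical stand-in
# for the paper's effort/reward model.
import random


def utility(client, participants, budget, costs):
    """Payoff of `client` given the current participant set."""
    if client not in participants:
        return 0.0
    return budget / len(participants) - costs[client]


def best_response_dynamics(num_clients=8, budget=10.0, seed=0, max_rounds=100):
    rng = random.Random(seed)
    costs = [rng.uniform(0.5, 3.0) for _ in range(num_clients)]  # effort cost per client
    participating = set()  # start with no client in the aggregation round
    for _ in range(max_rounds):
        changed = False
        for c in range(num_clients):
            stay_out = utility(c, participating - {c}, budget, costs)
            join_in = utility(c, participating | {c}, budget, costs)
            if join_in > stay_out and c not in participating:
                participating.add(c)
                changed = True
            elif join_in < stay_out and c in participating:
                participating.discard(c)
                changed = True
        if not changed:  # no client can improve unilaterally -> Nash equilibrium
            break
    return participating, costs


if __name__ == "__main__":
    equilibrium, costs = best_response_dynamics()
    print("Equilibrium participants:", sorted(equilibrium))
```

Because the shared-budget payoff admits an exact potential function, each unilateral switch strictly increases the potential, so this toy dynamic terminates in finitely many iterations, mirroring the convergence property the abstract claims for the distributed FL-MCS algorithm.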