{"title":"Federated Learning at Scale: Addressing Client Intermittency and Resource Constraints","authors":"Mónica Ribero;Haris Vikalo;Gustavo de Veciana","doi":"10.1109/JSTSP.2024.3430118","DOIUrl":null,"url":null,"abstract":"In federated learning systems, a server coordinates the training of machine learning models on data distributed across a number of participating client devices. In each round of training, the server selects a subset of devices to perform model updates and, in turn, aggregates those updates before proceeding to the next round of training. Most state-of-the-art federated learning algorithms assume that the clients are always available to perform training – an assumption readily violated in many practical settings where client availability is intermittent or even transient; moreover, in systems where the server samples from an exceedingly large number of clients, a client will likely participate in at most one round of training. This can lead to biasing the learned global model towards client groups endowed with more resources. In this paper, we consider systems where the clients are naturally grouped based on their data distributions, and the groups exhibit variations in the number of available clients. We present <sc>Flics-opt</small>, an algorithm for large-scale federated learning over heterogeneous data distributions, time-varying client availability and further constraints on client participation reflecting, e.g., overall energy efficiency objectives that should be met to achieve sustainable deployment. In particular, <sc>Flics-opt</small> dynamically learns a selection policy that adapts to client availability patterns and communication constraints, ensuring per-group long-term participation which minimizes the variance inevitably introduced into the learning process by client sampling. We show that for non-convex smooth functions <sc>Flics-opt</small> coupled with SGD converges at <inline-formula><tex-math>$O(1/\\sqrt{T})$</tex-math></inline-formula> rate, matching the state-of-the-art convergence results which require clients to be always available. We test <sc>Flics-opt</small> on three realistic federated datasets and show that, in terms of maximum accuracy, <sc>Flics-Avg</small> and <sc>Flics-Adam</small> outperform traditional <sc>FedAvg</small> by up to 280% and 60%, respectively, while exhibiting robustness in face of heterogeneous data distributions.","PeriodicalId":13038,"journal":{"name":"IEEE Journal of Selected Topics in Signal Processing","volume":"19 1","pages":"60-73"},"PeriodicalIF":8.7000,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Selected Topics in Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10601165/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
引用次数: 0
Abstract
In federated learning systems, a server coordinates the training of machine learning models on data distributed across a number of participating client devices. In each round of training, the server selects a subset of devices to perform model updates and, in turn, aggregates those updates before proceeding to the next round. Most state-of-the-art federated learning algorithms assume that the clients are always available to perform training, an assumption readily violated in many practical settings where client availability is intermittent or even transient; moreover, in systems where the server samples from an exceedingly large number of clients, a client will likely participate in at most one round of training. This can bias the learned global model towards client groups endowed with more resources. In this paper, we consider systems where the clients are naturally grouped based on their data distributions, and the groups exhibit variations in the number of available clients. We present Flics-opt, an algorithm for large-scale federated learning over heterogeneous data distributions, time-varying client availability, and further constraints on client participation reflecting, e.g., overall energy efficiency objectives that must be met for sustainable deployment. In particular, Flics-opt dynamically learns a selection policy that adapts to client availability patterns and communication constraints, ensuring per-group long-term participation that minimizes the variance inevitably introduced into the learning process by client sampling. We show that for non-convex smooth functions, Flics-opt coupled with SGD converges at an $O(1/\sqrt{T})$ rate, matching state-of-the-art convergence results that require clients to be always available. We test Flics-opt on three realistic federated datasets and show that, in terms of maximum accuracy, Flics-Avg and Flics-Adam outperform traditional FedAvg by up to 280% and 60%, respectively, while remaining robust in the face of heterogeneous data distributions.
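To make the select-then-aggregate round structure described above concrete, the following Python sketch pairs a simple deficit-based, group-aware client selector with a FedAvg-style weighted aggregation step. This is not FLICS-OPT itself: the names select_clients, fedavg_round, target_rate, deficit, local_update, and group_weight are hypothetical, and the deficit heuristic merely stands in for the learned selection policy described in the abstract.

import random

import numpy as np

# A minimal sketch, not the paper's algorithm. `target_rate`, `deficit`,
# `local_update`, and `group_weight` are hypothetical names introduced
# for illustration only.

def select_clients(groups, target_rate, deficit, budget):
    """Pick up to `budget` available clients, favouring groups whose
    cumulative participation lags their long-term target (a simple
    deficit-based heuristic standing in for the learned policy)."""
    for g, clients in groups.items():
        if clients:                       # group has clients available now
            deficit[g] += target_rate[g]  # participation owed this round
    selected = []
    for g in sorted(groups, key=lambda g: deficit[g], reverse=True):
        if len(selected) >= budget or not groups[g]:
            continue
        selected.append((g, random.choice(groups[g])))
        deficit[g] -= 1.0                 # selection repays the deficit
    return selected

def fedavg_round(global_params, selected, local_update, group_weight):
    """One FedAvg-style step: weighted average of the selected clients'
    locally updated parameters."""
    updates = [(local_update(global_params, client), group_weight[g])
               for g, client in selected]
    total = sum(w for _, w in updates)
    return sum(w * u for u, w in updates) / total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = {"A": ["a1", "a2"], "B": ["b1"], "C": []}  # group C unavailable
    deficit = {g: 0.0 for g in groups}
    target_rate = {"A": 0.5, "B": 0.5, "C": 0.5}
    weight = {g: 1.0 for g in groups}
    theta = np.zeros(4)                                 # global model params
    step = lambda params, client: params - 0.1 * rng.normal(size=params.shape)
    for _ in range(3):                                  # three training rounds
        chosen = select_clients(groups, target_rate, deficit, budget=2)
        theta = fedavg_round(theta, chosen, step, weight)
    print(theta)

Deficit counters of this kind (virtual queues) are a common way to enforce long-term per-group participation rates under a per-round selection budget; the paper's learned policy additionally adapts to client availability patterns and communication constraints.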
Journal Introduction
The IEEE Journal of Selected Topics in Signal Processing (JSTSP) focuses on the Field of Interest of the IEEE Signal Processing Society, which encompasses the theory and application of various signal processing techniques. These techniques include filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals using digital or analog devices. The term "signal" covers a wide range of data types, including audio, video, speech, image, communication, geophysical, sonar, radar, medical, musical, and others.
The journal format allows for in-depth exploration of signal processing topics, enabling the Society to cover both established and emerging areas. This includes interdisciplinary fields such as biomedical engineering and language processing, as well as areas not traditionally associated with engineering.