An Online Learning Approach for Client Selection in Federated Edge Learning under Budget Constraint

Lina Su, Ruiting Zhou, Ne Wang, Guang Fang, Zong-Qiang Li

Proceedings of the 51st International Conference on Parallel Processing, 2022-08-29. DOI: 10.1145/3545008.3545062
Federated learning (FL) has emerged as a new paradigm that enables distributed mobile devices to collaboratively learn a global model. Since mobile devices (i.e., clients) vary in model-training quality, client selection (CS) is critical for efficient FL. CS faces three challenges. First, client availability, training data volumes, and network connection status are time-varying and hard to predict. Second, the choice of clients for training and the number of local iterations strongly affect model accuracy, so selecting a subset of the available clients and controlling local iterations must guarantee model quality. Third, renting clients for model training incurs cost, so the long-term budget must be managed dynamically without knowledge of future inputs. To this end, we propose a federated edge learning (FedL) framework that selects appropriate clients and controls the number of training iterations in real time. FedL aims to reduce completion time while reaching the desired model convergence and satisfying the long-term budget for renting clients. FedL consists of two algorithms: i) an online learning algorithm that makes CS and iteration decisions based on historical learning results; ii) an online rounding algorithm that translates the fractional decisions produced by the online learning algorithm into integers satisfying the feasibility constraints. Rigorous mathematical proof shows that, for a given budget, both the dynamic regret and the dynamic fit have sub-linear upper bounds in time. Extensive experiments on realistic datasets show that FedL outperforms multiple state-of-the-art algorithms; in particular, FedL reduces completion time by at least 38% compared with the others.
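To make the two-stage structure concrete, the sketch below shows generic independent randomized rounding of fractional client-selection decisions under a per-round budget. This is only an illustrative stand-in, not the paper's online rounding algorithm (which carries feasibility and fit guarantees the sketch does not provide); all names, values, and the cost model here are hypothetical.

```python
import random

def round_selection(fractions, costs, budget, rng=None):
    """Round fractional decisions x_i in [0, 1] to 0/1 independently.

    If the fractional solution respects the budget in expectation
    (sum of c_i * x_i <= budget), the rounded set also respects it
    in expectation, since E[selected_i] = x_i.

    NOTE: generic randomized-rounding sketch, NOT the paper's online
    rounding algorithm, which additionally enforces feasibility.
    """
    rng = rng or random.Random()
    expected_cost = sum(c * x for c, x in zip(costs, fractions))
    assert expected_cost <= budget, "fractional solution exceeds budget"
    # Select client i with probability equal to its fractional value x_i.
    return [1 if rng.random() < x else 0 for x in fractions]

# Hypothetical example: 4 clients with fractional decisions from an
# online learner, per-client rental costs, and a per-round budget.
fractions = [0.9, 0.1, 0.5, 0.3]   # x_i from the fractional online algorithm
costs     = [2.0, 1.0, 3.0, 1.5]   # rental cost of each client per round
budget    = 5.0                    # expected-cost budget for this round

selected = round_selection(fractions, costs, budget, rng=random.Random(42))
```

The expected cost here is 0.9*2.0 + 0.1*1.0 + 0.5*3.0 + 0.3*1.5 = 3.85, within the budget of 5.0; the paper's algorithm additionally bounds the accumulated constraint violation (dynamic fit) over time.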