Scheduling Policies for Federated Learning in Wireless Networks: An Overview
Shi Wenqi, Sun Yuxuan, Huang Xiufeng, Zhou Sheng, Niu Zhisheng
ZTE Communications, vol. 18, no. 1, pp. 11-19, 2020. DOI: 10.12142/ZTECOM.202002003
Abstract
Due to the increasing need for massive data analysis and machine learning model training at the network edge, as well as rising concerns about data privacy, a new distributed training framework called federated learning (FL) has emerged and attracted much attention from both academia and industry. In FL, participating devices iteratively update local models based on their own data and contribute to the global training by uploading model updates until the training converges. Therefore, the computation capabilities of mobile devices can be utilized and data privacy can be preserved. However, deploying FL in resource-constrained wireless networks encounters several challenges, including the limited energy of mobile devices, weak on-board computing capability, and scarce wireless bandwidth. To address these challenges, recent solutions have been proposed to maximize the convergence rate or minimize the energy consumption under heterogeneous constraints. In this overview, we first introduce the background and fundamentals of FL. Then, the key challenges in deploying FL in wireless networks are discussed, and several existing solutions are reviewed. Finally, we highlight open issues and future research directions in FL scheduling.
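For concreteness, the local-update-and-aggregate loop described in the abstract can be sketched as a minimal FedAvg-style round. This is an illustrative sketch, not the paper's specific algorithm: the linear-regression objective, learning rate, local epoch count, and number of communication rounds are all assumptions chosen for a self-contained toy example.

```python
import numpy as np

# Minimal FedAvg-style sketch (assumed setup, not from the paper):
# each device runs local gradient steps on its private data, then the
# server averages the returned models, weighted by local dataset size.

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device's local training: gradient descent on mean squared error."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n)||Xw - y||^2
        w -= lr * grad
    return w

def federated_round(w_global, devices):
    """One communication round: server aggregates device models,
    weighted by each device's number of local samples."""
    n_total = sum(len(y) for _, y in devices)
    w_new = np.zeros_like(w_global)
    for X, y in devices:
        w_local = local_update(w_global, X, y)
        w_new += (len(y) / n_total) * w_local
    return w_new

# Toy run: three devices holding private linear-regression data.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
devices = []
for n in (30, 50, 20):  # heterogeneous dataset sizes across devices
    X = rng.normal(size=(n, 2))
    devices.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for t in range(20):  # iterate rounds until (approximate) convergence
    w = federated_round(w, devices)
print(w)  # approaches w_true without raw data leaving any device
```

Weighting the average by local sample count is the standard FedAvg choice; only model parameters cross the wireless link, which is why FL preserves data privacy while still exploiting on-device computation.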