Mingda Hu;Jingjing Zhang;Xiong Wang;Shengyun Liu;Zheng Lin
{"title":"Accelerating Federated Learning With Model Segmentation for Edge Networks","authors":"Mingda Hu;Jingjing Zhang;Xiong Wang;Shengyun Liu;Zheng Lin","doi":"10.1109/TGCN.2024.3424552","DOIUrl":null,"url":null,"abstract":"In the rapidly evolving landscape of distributed learning strategies, Federated Learning (FL) stands out for its features such as model training on resource-constrained edge devices and high data security. However, the growing complexity of neural network models produces two challenges such as communication bottleneck and resource under-utilization, especially in edge networks. To overcome these challenges, this paper introduces a novel framework by realizing the Parallel Communication-Computation Federated Learning Mode (P2CFed). Specifically, we design an adaptive layer-wise model segmentation strategy according to the wireless environments and computing capability of edge devices, which enables parallel training and transmission within different sub-models. In this way, parameter delivery takes place throughout the training process, thus considerably alleviating the communication overhead. Meanwhile, we also propose a joint optimization scheme with regard to the subchannel allocation, power control, and segmentation layer selection, which is then transformed into an iteration search process for obtaining optimal results. We have conducted extensive simulations to validate the effectiveness of P2CFed when compared with state-of-the-art benchmarks in terms of communication overhead and resource utilization. 
It also unveils that P2CFed brings a faster convergence rate and smaller training delay compared to traditional FL approaches.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 1","pages":"242-254"},"PeriodicalIF":5.3000,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Green Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10587213/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
In the rapidly evolving landscape of distributed learning strategies, Federated Learning (FL) stands out for features such as model training on resource-constrained edge devices and strong data security. However, the growing complexity of neural network models poses two challenges, namely communication bottlenecks and resource under-utilization, especially in edge networks. To overcome these challenges, this paper introduces a novel framework realizing the Parallel Communication-Computation Federated Learning mode (P2CFed). Specifically, we design an adaptive layer-wise model segmentation strategy according to the wireless environment and the computing capability of edge devices, which enables parallel training and transmission across different sub-models. In this way, parameter delivery takes place throughout the training process, considerably alleviating the communication overhead. Meanwhile, we also propose a joint optimization scheme for subchannel allocation, power control, and segmentation layer selection, which is transformed into an iterative search process for obtaining optimal results. We have conducted extensive simulations to validate the effectiveness of P2CFed against state-of-the-art benchmarks in terms of communication overhead and resource utilization. The results also show that P2CFed achieves a faster convergence rate and a smaller training delay than traditional FL approaches.
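The core idea of overlapping per-segment parameter uploads with the training of subsequent segments can be illustrated with a minimal timing sketch. This is not the paper's algorithm — the layer sizes, compute rate, uplink rate, and cut point below are all hypothetical — but it shows why pipelining segment transmission behind computation can only reduce the per-round delay relative to the train-everything-then-send-everything baseline of traditional FL:

```python
def sequential_delay(layers, compute_rate, uplink_rate):
    """Traditional FL round on one device: train all layers, then upload
    the whole model.  Each layer is (flops, param_size_bytes)."""
    train = sum(flops / compute_rate for flops, _ in layers)
    send = sum(size / uplink_rate for _, size in layers)
    return train + send

def pipelined_delay(segments, compute_rate, uplink_rate):
    """P2CFed-style sketch: upload each finished segment while the next
    segment is still training.  `segments` is a list of layer lists."""
    t_compute = 0.0  # time at which the current segment finishes training
    t_uplink = 0.0   # time at which the uplink becomes free
    for seg in segments:
        t_compute += sum(flops / compute_rate for flops, _ in seg)
        send = sum(size / uplink_rate for _, size in seg)
        # A segment can start uploading only when it is trained AND the
        # uplink has finished sending the previous segment.
        t_uplink = max(t_uplink, t_compute) + send
    return t_uplink

# Hypothetical (flops, param_size) per layer and one possible cut point.
layers = [(4e9, 2e6), (8e9, 8e6), (2e9, 4e6), (1e9, 1e6)]
segments = [layers[:2], layers[2:]]

seq = sequential_delay(layers, compute_rate=1e9, uplink_rate=1e6)
pipe = pipelined_delay(segments, compute_rate=1e9, uplink_rate=1e6)
assert pipe <= seq  # overlap hides part of the communication time
```

In this toy setting the sequential round takes 30 s (15 s training + 15 s upload), while the pipelined schedule finishes in 27 s because the first segment's 10 s upload overlaps with the 3 s of remaining training. Choosing where to cut (and the paper's joint subchannel/power/layer-selection optimization) determines how much of the communication can be hidden.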