Title: Hierarchically Federated Learning in Wireless Networks: D2D Consensus and Inter-Cell Aggregation
Authors: Jie Zhang; Li Chen; Yunfei Chen; Xiaohui Chen; Guo Wei
Journal: IEEE Transactions on Machine Learning in Communications and Networking, vol. 2, pp. 442-456
DOI: 10.1109/TMLCN.2024.3385355
Published: 2024-04-04
URL: https://ieeexplore.ieee.org/document/10491307/
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10491307
Citations: 0
Abstract
The decentralized federated learning (DFL) architecture enables clients to collaboratively train a shared machine learning model without a central parameter server. However, it is difficult to apply DFL to a multi-cell scenario due to inadequate model averaging and cross-cell device-to-device (D2D) communication. In this paper, we propose a hierarchically decentralized federated learning (HDFL) framework that combines intra-cell D2D links between devices with backhaul communications between base stations. In HDFL, devices from different cells collaboratively train a global model using periodic intra-cell D2D consensus and inter-cell aggregation. A strong convergence guarantee for the proposed HDFL algorithm is established even for non-convex objectives. Based on the convergence analysis, we characterize the impact of each cell's network topology and of the communication intervals of intra-cell consensus and inter-cell aggregation on the training performance. To further improve the performance of HDFL, we optimize the computation capacity selection and bandwidth allocation to minimize the training latency and energy overhead. Numerical results on the MNIST and CIFAR-10 datasets validate the superiority of HDFL over traditional DFL methods in the multi-cell scenario.
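To make the two-timescale structure described above concrete, the following is a minimal sketch of an HDFL-style training loop: local SGD steps, periodic intra-cell D2D consensus via a doubly stochastic mixing matrix, and periodic inter-cell aggregation over the backhaul. This is an illustrative assumption of the loop structure only, not the paper's implementation; the ring D2D topology, the interval names tau_1 and tau_2, and the local_gradient callable are all hypothetical.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for an assumed ring D2D topology within a cell."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def hdfl_sketch(local_gradient, models, cells, lr=0.01, tau_1=5, tau_2=4, rounds=3):
    """models: dict device_id -> parameter vector; cells: list of lists of device ids.
    tau_1 = local SGD steps per consensus; tau_2 = consensus rounds per inter-cell aggregation."""
    mixing = {tuple(c): ring_mixing_matrix(len(c)) for c in cells}
    for _ in range(rounds):                      # one round ends with an inter-cell aggregation
        for _ in range(tau_2):
            for _ in range(tau_1):               # local SGD on each device
                for d in models:
                    models[d] = models[d] - lr * local_gradient(d, models[d])
            for c in cells:                      # intra-cell D2D consensus (gossip averaging)
                W = mixing[tuple(c)]
                stacked = np.stack([models[d] for d in c])
                mixed = W @ stacked
                for i, d in enumerate(c):
                    models[d] = mixed[i]
        # inter-cell aggregation via base stations: average cell models into a global model
        global_avg = np.mean([np.mean([models[d] for d in c], axis=0) for c in cells], axis=0)
        for d in models:
            models[d] = global_avg.copy()
    return models
```

Under this sketch, larger tau_1 and tau_2 reduce communication at the cost of slower consensus, which mirrors the trade-off the convergence analysis is said to characterize.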