Authors: Afsaneh Mahmoudi; Ming Xiao; Emil Björnson
DOI: 10.1109/TMLCN.2025.3583659
Journal: IEEE Transactions on Machine Learning in Communications and Networking, vol. 3, pp. 761-778
Publication date: 2025-06-26 (Journal Article)
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11052837
Accelerating Energy-Efficient Federated Learning in Cell-Free Networks With Adaptive Quantization
Federated Learning (FL) enables clients to share model parameters instead of raw data, reducing communication overhead. Traditional wireless networks, however, suffer from latency issues when supporting FL. Cell-Free Massive MIMO (CFmMIMO) offers a promising alternative, as it can serve multiple clients simultaneously on shared resources, enhancing spectral efficiency and reducing latency in large-scale FL. Still, communication resource constraints at the client side can impede the completion of FL training. To tackle this issue, we propose a low-latency, energy-efficient FL framework with optimized uplink power allocation for efficient uplink communication. Our approach integrates an adaptive quantization strategy that dynamically adjusts bit allocation for local gradient updates, significantly lowering communication cost. We formulate a joint optimization problem involving FL model updates, local iterations, and power allocation. This problem is solved using sequential quadratic programming (SQP) to balance energy consumption and latency. Moreover, for local model training, clients employ the AdaDelta optimizer, which improves convergence compared to standard SGD, Adam, and RMSProp. We also provide a theoretical analysis of FL convergence under AdaDelta. Numerical results demonstrate that, under equal energy and latency budgets, our power allocation strategy improves test accuracy by up to 7% and 19% compared to Dinkelbach and max-sum rate approaches. Furthermore, across all power allocation methods, our quantization scheme outperforms AQUILA and LAQ, increasing test accuracy by up to 36% and 35%, respectively.
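The abstract's adaptive quantization idea — dynamically adjusting the bit budget used to encode each local gradient update — can be illustrated with a minimal sketch. The bit-allocation heuristic below (scaling bits with the log of the gradient's norm relative to a reference) is an assumption for illustration only, not the authors' rule; the uniform symmetric quantizer is a standard building block.

```python
import numpy as np

def adaptive_quantize(grad, min_bits=2, max_bits=8, ref_norm=1.0):
    """Uniformly quantize a gradient vector with an adaptively chosen
    bit budget: larger updates (by norm) get more bits. Illustrative
    stand-in for the paper's scheme, not its exact bit-allocation rule."""
    g_norm = np.linalg.norm(grad)
    # Hypothetical heuristic: grow bits with log2 of the relative norm.
    bits = int(np.clip(min_bits + 2 * np.log2(1 + g_norm / ref_norm),
                       min_bits, max_bits))
    levels = 2 ** bits - 1
    g_max = np.max(np.abs(grad)) + 1e-12
    # Uniform symmetric quantization onto `levels` levels in [-g_max, g_max].
    q = np.round((grad / g_max) * (levels // 2))
    dequant = q / (levels // 2) * g_max
    return dequant, bits
```

Transmitting `q` as `bits`-wide integers (plus the scalar `g_max`) in place of full-precision floats is what lowers the uplink communication cost; the dequantized vector is what the server aggregates.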
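The joint energy/latency problem the abstract describes is solved with sequential quadratic programming. A toy version of that structure — minimize total uplink transmission energy over per-client powers subject to a latency deadline — can be sketched with SciPy's SLSQP solver (an SQP-family method). The channel gains, payload size, bandwidth, and deadline below are made-up illustrative values, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

gains = np.array([1.0, 0.5, 0.25])  # hypothetical uplink channel gains
bits = 1e6                          # payload per client (bits), assumed
bandwidth = 1e6                     # Hz, assumed
deadline = 5.0                      # latency budget in seconds, assumed

def energy(p):
    # Energy = power * airtime, with airtime = bits / Shannon rate.
    rate = bandwidth * np.log2(1 + gains * p)
    return np.sum(p * bits / rate)

def latency_slack(p):
    # Must be >= 0 elementwise: each client finishes before the deadline.
    rate = bandwidth * np.log2(1 + gains * p)
    return deadline - bits / rate

res = minimize(energy, x0=np.ones(3), method="SLSQP",
               bounds=[(1e-3, 10.0)] * 3,
               constraints=[{"type": "ineq", "fun": latency_slack}])
```

Because per-client energy grows with power while the achievable rate grows only logarithmically, the solver pushes each power down to the point where the latency constraint becomes tight — weaker channels (smaller gains) end up with higher powers.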
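The local optimizer the clients use, AdaDelta, maintains decaying averages of squared gradients and squared parameter updates, so no global learning rate needs to be tuned — one reason it can be attractive for heterogeneous FL clients. A minimal sketch of the standard AdaDelta update (Zeiler's formulation, with the usual defaults rho = 0.95, eps = 1e-6):

```python
import numpy as np

def adadelta_step(grad, state, rho=0.95, eps=1e-6):
    """One AdaDelta update. `state` holds the running averages
    (E[g^2], E[dx^2]); returns the parameter delta and new state."""
    Eg2, Edx2 = state
    Eg2 = rho * Eg2 + (1 - rho) * grad ** 2       # decay-average squared grads
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad  # unit-corrected step
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2       # decay-average squared steps
    return dx, (Eg2, Edx2)
```

For example, minimizing f(x) = x^2 just iterates `dx, state = adadelta_step(2 * x, state); x += dx`; the step size adapts automatically as the two averages equilibrate.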