Deep Reinforcement Learning for Energy Efficiency Maximization in SWIPT-Based Over-the-Air Federated Learning
Xinran Zhang, Hui Tian, Wanli Ni, Zhaohui Yang, Mengying Sun
IEEE Transactions on Green Communications and Networking, vol. 8, no. 1, pp. 525-541. DOI: 10.1109/TGCN.2023.3307428
Abstract
Federated learning (FL) is a promising solution for preserving user privacy in Internet of Things (IoT) networks thanks to its distributed computing feature. Furthermore, over-the-air FL (AirFL) can leverage the superposition property of wireless channels to achieve fast model aggregation through concurrent analog transmissions. To make AirFL sustainable for energy-constrained IoT devices, we apply simultaneous wireless information and power transfer (SWIPT) at the base station to broadcast the global model and charge local devices during model training. To characterize the optimality gap between the aggregated FL model and the ideal FL model, which is caused by signal misalignment, channel fading, and random noise in the model distribution and aggregation processes, we prove the convergence of SWIPT-based AirFL and thereby show the precise impact of uplink and downlink communications on learning performance. We then formulate a long-term energy efficiency (EE) maximization problem and propose a deep reinforcement learning algorithm with a collaborative double-agent approach that optimizes resource allocation while guaranteeing learning performance. Numerical results demonstrate that the proposed algorithm achieves up to a 41% improvement in EE over benchmark schemes under various network settings, and that the learning performance of SWIPT-based AirFL can be improved significantly by alleviating transmission errors.
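The over-the-air aggregation the abstract refers to exploits the fact that signals transmitted concurrently over a multiple-access channel superpose physically. As a rough illustration (a generic AirFL signal model, not necessarily the paper's exact formulation; all symbols below are assumptions), if device k precodes its local update x_k with p_k over a fading channel h_k, the base station receives

\[
y = \sum_{k=1}^{K} h_k\, p_k\, x_k + n, \qquad
\hat{x} = \frac{y}{\eta K}
      = \underbrace{\frac{1}{K}\sum_{k=1}^{K} x_k}_{\text{ideal average}}
      + \underbrace{\frac{1}{\eta K}\sum_{k=1}^{K} \left(h_k p_k - \eta\right) x_k}_{\text{misalignment error}}
      + \underbrace{\frac{n}{\eta K}}_{\text{noise error}},
\]

where \(\eta\) is a receive scaling factor and \(n\) is receiver noise. The second and third terms correspond to the misalignment and noise effects that the convergence analysis quantifies.

A minimal toy simulation of this model (assuming Gaussian updates, Rayleigh fading, and channel-inversion precoding under a transmit-power cap; none of these choices is taken from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 8                 # number of devices, model dimension (toy sizes)
x = rng.normal(size=(K, d))  # local model updates, one row per device

# Rayleigh fading uplink channels and channel-inversion precoding.
h = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
eta = 1.0                    # receive scaling factor
p = eta / h                  # ideal inversion would align all signals perfectly...
mag = np.minimum(np.abs(p), 2.0)    # ...but a power cap truncates weak channels,
p = mag * np.exp(1j * np.angle(p))  # introducing signal misalignment

# Concurrent analog transmission: signals superpose in the air, plus noise.
noise = 0.05 * (rng.normal(size=d) + 1j * rng.normal(size=d))
y = np.sum((h * p)[:, None] * x, axis=0) + noise

x_hat = np.real(y) / (eta * K)   # estimated global model at the base station
ideal = x.mean(axis=0)           # error-free FedAvg aggregate
print("aggregation MSE:", np.mean((x_hat - ideal) ** 2))
```

Shrinking the power cap or raising the noise level visibly increases the aggregation MSE, which is the mechanism behind the abstract's final claim that alleviating transmission errors improves learning performance.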