Title: A Reinforcement Learning-Based Stochastic Game for Energy-Efficient UAV Swarm-Assisted MEC With Dynamic Clustering and Scheduling
Authors: Jialiuyuan Li; Changyan Yi; Jiayuan Chen; You Shi; Tong Zhang; Xiaolong Li; Ran Wang; Kun Zhu
DOI: 10.1109/TGCN.2024.3424449
Journal: IEEE Transactions on Green Communications and Networking, vol. 9, no. 1, pp. 255-270
Publication date: 2024-07-05
URL: https://ieeexplore.ieee.org/document/10587195/
Citations: 0
Abstract
In this paper, we study energy-efficient unmanned aerial vehicle (UAV) swarm-assisted mobile edge computing (MEC) with dynamic clustering and scheduling. In the considered system model, UAVs are divided into multiple swarms, each consisting of a leader UAV and several follower UAVs. These UAVs serve as mobile edge servers, providing computing services to the ground end-users within their coverage. Unlike existing works, we allow UAVs to dynamically re-cluster into different swarms; in other words, each follower UAV can change its leader over time based on time-varying spatial positions, updated application placement, etc. With the objective of maximizing the long-term energy efficiency of the UAV swarm-assisted MEC system, a joint optimization problem of UAV swarm dynamic clustering and scheduling is formulated. Considering the inherent cooperation and competition among intelligent UAVs, we further reformulate this problem as a combination of a series of strongly interconnected multi-agent stochastic games, and theoretically prove the existence of the corresponding Nash equilibrium (NE). We then propose a novel reinforcement learning-based UAV swarm dynamic coordination (RLDC) algorithm for obtaining such an equilibrium, and analyze its convergence and complexity. Simulations are performed to evaluate the performance of RLDC and illustrate its superiority over existing approaches.
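To make the dynamic-clustering idea concrete, the following is a minimal, self-contained toy sketch (not the authors' RLDC algorithm): each follower UAV acts as an independent Q-learning agent whose action is which leader UAV (swarm) to join, and whose reward is a simple energy-efficiency proxy (inverse distance to the chosen leader, standing in for communication energy cost). All positions, parameter values, and function names here are illustrative assumptions; the paper's actual state space, reward, and game-theoretic coordination are far richer.

```python
import random
from collections import defaultdict

# Hypothetical toy layout on a 1-D line (illustrative values only).
LEADERS = {0: 2.0, 1: 8.0}            # leader UAV id -> position
FOLLOWERS = {0: 1.0, 1: 4.0, 2: 9.0}  # follower UAV id -> position

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2     # learning rate, discount, exploration

def reward(follower_pos, leader_id):
    # Toy energy-efficiency proxy: a closer leader implies lower
    # communication energy, hence a higher reward.
    return 1.0 / (1.0 + abs(follower_pos - LEADERS[leader_id]))

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    # One independent Q-table per follower agent. The state is collapsed
    # to a single static state in this sketch; the paper's stochastic
    # games track time-varying positions, application placement, etc.
    Q = {f: defaultdict(float) for f in FOLLOWERS}
    for _ in range(episodes):
        for f, pos in FOLLOWERS.items():
            if rng.random() < EPS:
                a = rng.choice(list(LEADERS))            # explore
            else:
                a = max(LEADERS, key=lambda l: Q[f][l])  # exploit
            r = reward(pos, a)
            best_next = max(Q[f][l] for l in LEADERS)
            Q[f][a] += ALPHA * (r + GAMMA * best_next - Q[f][a])
    # Greedy clustering after learning: each follower picks its leader.
    return {f: max(LEADERS, key=lambda l: Q[f][l]) for f in FOLLOWERS}

if __name__ == "__main__":
    print(train())  # e.g., followers near leader 0 join swarm 0, etc.
```

With these toy positions, followers 0 and 1 end up assigned to the nearby leader 0 and follower 2 to leader 1, mirroring (in a trivial setting) how follower UAVs re-select leaders to improve long-term energy efficiency.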