Title: Trajectory Design and Generalization for UAV Enabled Networks: A Deep Reinforcement Learning Approach
Authors: Xuan Li, Qiang Wang, Jie Liu, Wenqi Zhang
Published in: 2020 IEEE Wireless Communications and Networking Conference (WCNC), May 2020
DOI: 10.1109/WCNC45663.2020.9120668
Citations: 6
Abstract
In this paper, an unmanned aerial vehicle (UAV) serves as a flying base station (BS) that provides wireless communication services. We propose two algorithms for designing the trajectory of the UAV and analyze how different training approaches affect transfer to new environments. When the UAV tracks users that move along specific, known paths, we propose a proximal policy optimization (PPO)-based algorithm that maximizes the instantaneous sum rate (MSR-PPO). The UAV is modeled as a deep reinforcement learning (DRL) agent that learns how to move by interacting with the environment. When the UAV must serve users along unknown paths, as in emergency scenarios, we propose a random training proximal policy optimization (RT-PPO) algorithm that transfers a pre-trained model to new tasks for quick deployment. Unlike classical DRL algorithms, in which the agent is trained and evaluated on the same task, RT-PPO randomizes the features of the training tasks so that the learned policy can transfer to new tasks. Numerical results show that MSR-PPO achieves a remarkable improvement in sum rate and that RT-PPO generalizes effectively to unseen tasks.
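The core idea behind RT-PPO, randomizing task features each training episode so the policy cannot overfit to one user trajectory, can be sketched independently of the PPO update itself. Below is a minimal, hypothetical illustration: the environment interface, state/action spaces, and reward (the paper's instantaneous sum rate) are not reproduced from the paper, and the class and parameter names are invented for this sketch.

```python
import random


class RandomizedUserPathEnv:
    """Toy environment whose user paths are re-sampled every episode.

    Hypothetical interface for illustration only; the paper's actual
    environment, reward, and observation design are not reproduced here.
    """

    def __init__(self, num_users=2, num_waypoints=5, area=100.0, rng=None):
        self.num_users = num_users          # users served by the UAV-BS
        self.num_waypoints = num_waypoints  # waypoints per user path
        self.area = area                    # side length of the service area
        self.rng = rng or random.Random()
        self.user_paths = None

    def reset(self):
        # Task randomization: each episode, every user follows a freshly
        # sampled waypoint path, so the policy must learn behavior that
        # transfers across paths rather than memorizing a single one.
        self.user_paths = [
            [(self.rng.uniform(0, self.area), self.rng.uniform(0, self.area))
             for _ in range(self.num_waypoints)]
            for _ in range(self.num_users)
        ]
        return self.user_paths


env = RandomizedUserPathEnv(rng=random.Random(0))
first = env.reset()
second = env.reset()
# Consecutive episodes present different tasks to the agent.
print(first != second)
```

Training a fixed-path policy (as in MSR-PPO's setting) would instead sample the paths once and reuse them in every `reset`; the only change RT-PPO-style randomization requires is re-sampling per episode.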