{"title":"基于能量收集的无人机辅助车辆边缘计算:一种深度强化学习方法","authors":"Zhanpeng Zhang, Xinghuan Xie, Chen Xu, Runze Wu","doi":"10.1109/ICCCWorkshops55477.2022.9896720","DOIUrl":null,"url":null,"abstract":"Unmanned aerial vehicle (UAV) can provide communication and computation service enhancements to the In-ternet of vehicles (IoV) via flexible deployment and short-range transmission. In this paper, we investigate an energy harvesting-based UAV-assisted vehicular edge computing framework, where the UAV equipped with edge server helps to execute vehicular computing tasks, and meanwhile harvests energy from the base station and vehicles by wireless power transfer (WPT) and simultaneous wireless information and power transfer (SWIPT) techniques, respectively. Considering a long-term task offloading scenario, we aim to maximize the amount of data offloaded to the UAV for computation during the whole execution time by jointly optimizing computation resource allocation, power splitting and UAV speed. Moreover, since the formulated problem is a time-dimension coupled long-term optimization which is difficult to solve, we design a deep reinforcement learning (DRL) approach, the basis of which is the deep deterministic policy gradient (DDPG) algorithm, to obtain a learning result. Simulation results show that the proposed method achieves a higher amount of data offloaded to the UAV for computation compared to other benchmarks.","PeriodicalId":148869,"journal":{"name":"2022 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Energy Harvesting-Based UAV-Assisted Vehicular Edge Computing: A Deep Reinforcement Learning Approach\",\"authors\":\"Zhanpeng Zhang, Xinghuan Xie, Chen Xu, Runze Wu\",\"doi\":\"10.1109/ICCCWorkshops55477.2022.9896720\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Unmanned aerial vehicle (UAV) can provide communication and computation service enhancements to the In-ternet of vehicles (IoV) via flexible deployment and short-range transmission. In this paper, we investigate an energy harvesting-based UAV-assisted vehicular edge computing framework, where the UAV equipped with edge server helps to execute vehicular computing tasks, and meanwhile harvests energy from the base station and vehicles by wireless power transfer (WPT) and simultaneous wireless information and power transfer (SWIPT) techniques, respectively. Considering a long-term task offloading scenario, we aim to maximize the amount of data offloaded to the UAV for computation during the whole execution time by jointly optimizing computation resource allocation, power splitting and UAV speed. Moreover, since the formulated problem is a time-dimension coupled long-term optimization which is difficult to solve, we design a deep reinforcement learning (DRL) approach, the basis of which is the deep deterministic policy gradient (DDPG) algorithm, to obtain a learning result. 
Simulation results show that the proposed method achieves a higher amount of data offloaded to the UAV for computation compared to other benchmarks.\",\"PeriodicalId\":148869,\"journal\":{\"name\":\"2022 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"volume\":\"7 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCWorkshops55477.2022.9896720\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCWorkshops55477.2022.9896720","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Energy Harvesting-Based UAV-Assisted Vehicular Edge Computing: A Deep Reinforcement Learning Approach
Unmanned aerial vehicles (UAVs) can enhance communication and computation services for the Internet of Vehicles (IoV) through flexible deployment and short-range transmission. In this paper, we investigate an energy harvesting-based UAV-assisted vehicular edge computing framework, in which a UAV equipped with an edge server executes vehicular computing tasks while harvesting energy from the base station via wireless power transfer (WPT) and from vehicles via simultaneous wireless information and power transfer (SWIPT). Considering a long-term task offloading scenario, we aim to maximize the amount of data offloaded to the UAV for computation over the whole execution time by jointly optimizing computation resource allocation, power splitting, and UAV speed. Since the formulated problem is a long-term optimization coupled across the time dimension and is therefore difficult to solve directly, we design a deep reinforcement learning (DRL) approach based on the deep deterministic policy gradient (DDPG) algorithm. Simulation results show that the proposed method offloads a larger amount of data to the UAV for computation than the benchmark schemes.
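The abstract names DDPG as the learner for the joint continuous control of computation resource allocation, power splitting, and UAV speed. As a rough illustration of that choice only, the sketch below shows a generic single-agent DDPG update in PyTorch. The state contents, the dimensions STATE_DIM and ACTION_DIM, the network sizes, and the mapping of the action vector to (resource share, power-splitting ratio, speed) are assumptions for illustration, not the authors' implementation.

```python
# Minimal DDPG sketch (PyTorch). Illustrative only: dimensions, network
# sizes, and the state/action semantics are assumptions, not the paper's.
import torch
import torch.nn as nn

STATE_DIM = 6   # assumed: e.g., UAV battery, position, task queue, channel gains
ACTION_DIM = 3  # assumed: CPU-allocation fraction, power-splitting ratio, UAV speed

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Sigmoid(),  # actions normalized to [0, 1]
        )
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()  # target networks for stable bootstrapping
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(s, a, r, s2):
    """One DDPG step on a batch; under the paper's objective, the reward r
    would be the amount of data offloaded to the UAV in the current slot."""
    with torch.no_grad():
        q_target = r + GAMMA * critic_t(s2, actor_t(s2))
    # Critic: regress Q(s, a) toward the bootstrapped target.
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # Actor: ascend the critic's estimate of Q(s, actor(s)).
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    # Polyak-average the target networks toward the online networks.
    for p, pt in zip(actor.parameters(), actor_t.parameters()):
        pt.data.mul_(1 - TAU).add_(TAU * p.data)
    for p, pt in zip(critic.parameters(), critic_t.parameters()):
        pt.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call with a random batch of 32 transitions; shapes:
# s, s2 -> (B, STATE_DIM), a -> (B, ACTION_DIM), r -> (B, 1).
s, s2 = torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM)
a, r = torch.rand(32, ACTION_DIM), torch.rand(32, 1)
ddpg_update(s, a, r, s2)
```

DDPG is a natural fit for this problem because all three control variables are continuous-valued. A replay buffer and exploration noise, omitted above for brevity, would complete the standard algorithm.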