Mingxuan Huang, Kaixuan Sun, Yunpeng Hou, Zhicheng Ye, Yuanlong Wan, Huasen He
Proceedings of the 2022 5th International Conference on Algorithms, Computing and Artificial Intelligence, December 23, 2022. DOI: 10.1145/3579654.3579736
Deep reinforcement learning based delay-aware task offloading for UAV-assisted edge computing
Multi-access edge computing has been widely adopted in various Internet of Things (IoT) devices because of its strong computing capacity and fast interaction. Improving the service extensibility of edge computing and optimizing the computation offloading strategy have become key to improving the quality of service for edge computing users. However, traditional offloading strategies based on mathematical programming have exposed inherent limitations in dynamic scenarios and cannot meet the requirements of multiple mobile terminals distributed over a large area. Therefore, this paper uses Unmanned Aerial Vehicles (UAVs) to establish a multi-UAV-assisted edge computing framework that extends the service range, and proposes a reinforcement-learning-based offloading strategy that offloads the growing computing demands of mobile terminals to edge servers. By mapping the states of mobile terminals and UAVs to the corresponding action space and then offloading computing tasks to UAVs, the energy consumed by mobile terminals in computing and processing tasks can be effectively reduced. Jointly considering the potential curse of dimensionality in the state space and the convergence failure caused by growing device numbers, a novel computation offloading strategy based on deep reinforcement learning is proposed. Moreover, we design a load-balancing mechanism among the UAVs to improve processing capacity. Experimental results show that the proposed algorithm effectively reduces the computing energy consumption of mobile terminals and avoids task timeouts with a short convergence time.
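To illustrate the state-to-action mapping the abstract describes, the sketch below trains a tabular Q-learning agent on a toy offloading decision. This is not the paper's method (the paper uses deep reinforcement learning with a load-balancing mechanism); the discretized state space, the cost model in `reward`, and all numeric values are hypothetical, chosen only to show how an RL agent can learn when offloading to a UAV beats local computing.

```python
import random

# Toy states: (terminal_queue_level, uav_load_level), each discretized to {0,1,2}.
# Toy actions: 0 = compute locally, 1 = offload the task to a UAV.
LOCAL, OFFLOAD = 0, 1

def reward(queue, uav_load, action):
    # Hypothetical energy/delay cost model: local computing costs energy
    # proportional to the terminal's queue; offloading incurs a small
    # transmission cost plus a congestion penalty on a loaded UAV.
    if action == LOCAL:
        return -2.0 * (queue + 1)
    return -0.5 - 1.5 * uav_load

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table over all 9 discretized states, one value per action.
    q = {(s1, s2): [0.0, 0.0] for s1 in range(3) for s2 in range(3)}
    for _ in range(episodes):
        state = (rng.randrange(3), rng.randrange(3))
        for _ in range(10):  # short episode of arriving tasks
            # Epsilon-greedy action selection over the two offloading actions.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((LOCAL, OFFLOAD), key=lambda x: q[state][x])
            r = reward(state[0], state[1], a)
            nxt = (rng.randrange(3), rng.randrange(3))  # random task arrivals
            # Standard Q-learning update toward the bootstrapped target.
            q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# A busy terminal (queue level 2) facing a lightly loaded UAV (load 0)
# should learn to prefer offloading over local computing.
print(q[(2, 0)][OFFLOAD] > q[(2, 0)][LOCAL])
```

The paper's deep variant replaces the Q-table with a neural network precisely because this tabular form breaks down as the number of terminals and UAVs grows, which is the curse-of-dimensionality issue the abstract raises.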