Policy network-based dual-agent deep reinforcement learning for multi-resource task offloading in multi-access edge cloud networks
Feng Chuan, Zhang Xu, Pengchao Han, Tianchun Ma, Xiaoxue Gong
China Communications, April 2024. DOI: 10.23919/JCC.fa.2023-0383.202404
Abstract
Multi-access Edge Cloud (MEC) networks extend cloud computing services and capabilities to the network edge. By bringing computation and storage closer to end users and connected devices, MEC networks can support a wide range of applications. They can also leverage various types of resources, including computation, network, radio, and location-based resources, to provide multidimensional resources for intelligent applications in 5G/6G. However, tasks generated by users often consist of multiple subtasks that require different types of resources. Because the resources provided by devices are heterogeneous, offloading such multi-resource task requests to the edge cloud while maximizing the resulting benefit is a challenging problem. To address this issue, we mathematically model task requests composed of multiple subtasks and prove that the multi-resource task offloading problem is NP-hard. We then propose a novel Dual-Agent Deep Reinforcement Learning algorithm with Node First and Link features (NF_L_DA_DRL), built on a policy network, to optimize the benefit of offloading multi-resource task requests in MEC networks. Simulation results show that, compared with baseline algorithms, the proposed algorithm effectively improves the benefit of task offloading and achieves higher resource utilization.
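To make the dual-agent, policy-network idea concrete, the sketch below shows one possible shape of such a setup: one policy network picks the edge node that hosts a subtask and a second picks the link that carries its traffic, with both updated from the offloading benefit via a REINFORCE-style gradient. This is only a minimal illustration under assumed state/action sizes, class names (PolicyNet, offload_step), and reward function; it is not the authors' NF_L_DA_DRL implementation or their node-first/link-feature state design.

```python
# Minimal sketch of a dual-agent, policy-network offloading step.
# All dimensions, names, and the reward function are hypothetical placeholders.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps a state vector to a categorical distribution over discrete actions."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(state))

# Hypothetical sizes: the state would encode node and link features of the MEC network.
node_agent = PolicyNet(state_dim=32, num_actions=10)   # which edge node hosts the subtask
link_agent = PolicyNet(state_dim=32, num_actions=20)   # which link carries its traffic
optimizer = torch.optim.Adam(
    list(node_agent.parameters()) + list(link_agent.parameters()), lr=1e-3
)

def offload_step(state: torch.Tensor, reward_fn):
    """Sample node-then-link actions and apply a REINFORCE-style update."""
    node_dist = node_agent(state)
    node_action = node_dist.sample()
    link_dist = link_agent(state)            # could also be conditioned on node_action
    link_action = link_dist.sample()

    # reward_fn stands in for the offloading benefit of the chosen placement.
    reward = reward_fn(node_action.item(), link_action.item())
    loss = -(node_dist.log_prob(node_action) + link_dist.log_prob(link_action)) * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return node_action.item(), link_action.item(), reward
```

In this sketch, sampling the node before the link mirrors the "node first" ordering named in the algorithm's title, but how the paper actually couples the two agents and constructs their features is specified in the full text, not here.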