{"title":"Deep Reinforcement Learning for Energy-Efficient Task Offloading in Cooperative Vehicular Edge Networks","authors":"Paul Agbaje, E. Nwafor, Habeeb Olufowobi","doi":"10.1109/INDIN51400.2023.10218113","DOIUrl":null,"url":null,"abstract":"In the Internet of Vehicle ecosystem, multi-access edge computing (MEC) enables mobile nodes to improve their communication and computation capabilities by executing transactions in near real-time. However, the limited energy and computation capabilities of MEC servers limit the efficiency of task computation. Moreover, the use of static edge servers in dense vehicular networks may lead to an influx of service requests that negatively impact the quality of service (QoS) of the edge network. To enhance the QoS and optimize network resources, minimizing offloading computation costs in terms of reduced latency and energy consumption is crucial. In this paper, we propose a cooperative offloading scheme for vehicular nodes, using vehicles as mobile edge servers, which minimizes energy consumption and network delay. In addition, an optimization problem is presented, which is formulated as a Markov Decision Process (MDP). The solution proposed is a deep reinforcement-based Twin Delayed Deep Deterministic policy gradient (TD3), ensuring an optimal balance between task computation time delay and the energy consumption of the system.","PeriodicalId":174443,"journal":{"name":"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 21st International Conference on Industrial Informatics (INDIN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/INDIN51400.2023.10218113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In the Internet of Vehicles (IoV) ecosystem, multi-access edge computing (MEC) enables mobile nodes to improve their communication and computation capabilities by executing transactions in near real-time. However, the limited energy and computation resources of MEC servers constrain the efficiency of task computation. Moreover, the use of static edge servers in dense vehicular networks may lead to an influx of service requests that degrades the quality of service (QoS) of the edge network. To enhance QoS and optimize network resources, it is crucial to minimize the cost of offloading computation in terms of both latency and energy consumption. In this paper, we propose a cooperative offloading scheme for vehicular nodes that uses vehicles as mobile edge servers to minimize energy consumption and network delay. We formulate the resulting offloading cost-minimization problem as a Markov Decision Process (MDP) and solve it with a deep reinforcement learning-based Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which achieves an effective balance between task computation delay and the energy consumption of the system.
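To make the MDP formulation and the delay/energy trade-off concrete, the sketch below shows a toy offloading environment trained with an off-the-shelf TD3 implementation (Stable-Baselines3). It is a minimal illustration, not the authors' model: the state variables, the parallel local/offloaded delay model, the energy constants, the i.i.d. task arrivals, and the reward weights `w_delay` and `w_energy` are all assumptions made here for demonstration.

```python
# Illustrative sketch only: all dynamics, constants, and names are
# assumptions for demonstration, not the model from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class OffloadingEnv(gym.Env):
    """Toy vehicular task-offloading MDP.

    State:  [task size (Mbit), local CPU freq (GHz), uplink rate (Mbit/s)].
    Action: fraction of the task offloaded to a nearby vehicular edge server.
    Reward: negative weighted sum of completion delay and energy consumption.
    """

    def __init__(self, w_delay=0.5, w_energy=0.5):
        super().__init__()
        self.w_delay, self.w_energy = w_delay, w_energy
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Box(
            low=0.0, high=1.0, shape=(1,), dtype=np.float32)

    def _sample_state(self):
        return np.array([np.random.uniform(1, 10),    # task size
                         np.random.uniform(0.5, 2),   # local CPU frequency
                         np.random.uniform(5, 50)],   # uplink rate
                        dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self._sample_state()
        return self.state, {}

    def step(self, action):
        size, freq, rate = self.state
        rho = float(np.clip(action[0], 0.0, 1.0))     # offloaded fraction
        cycles_per_bit = 500.0                        # assumed workload density
        # Assumed model: local part and offloaded part run in parallel,
        # so completion delay is the slower of the two.
        t_local = (1 - rho) * size * cycles_per_bit / (freq * 1e3)
        t_tx = rho * size / rate
        delay = max(t_local, t_tx)
        # Assumed energy model: CPU energy grows with f^2, plus radio energy.
        energy = ((1 - rho) * size * cycles_per_bit * 1e-3 * freq ** 2
                  + 0.1 * t_tx)
        reward = -(self.w_delay * delay + self.w_energy * energy)
        self.state = self._sample_state()             # i.i.d. task arrivals
        return self.state, reward, False, False, {}


if __name__ == "__main__":
    # Bound episode length so the off-policy replay buffer sees episode ends.
    env = gym.wrappers.TimeLimit(OffloadingEnv(), max_episode_steps=200)
    model = TD3("MlpPolicy", env, learning_rate=1e-3, verbose=0)
    model.learn(total_timesteps=10_000)
```

The weights `w_delay` and `w_energy` expose the delay/energy trade-off directly in the reward, which is the lever the abstract refers to when it speaks of balancing task computation delay against system energy consumption.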