Distributed RL-Based Resource Allocation and Task Offloading for Vehicular Edge of Things Computing
Ghada Afifi; Bassem Mokhtar
IEEE Open Journal of Vehicular Technology, vol. 6, pp. 1796-1814
Published: 2025-06-23 | DOI: 10.1109/OJVT.2025.3582035
https://ieeexplore.ieee.org/document/11045983/
Abstract
Smart vehicles are increasingly equipped with advanced sensors and computational resources that enable them to perceive their surroundings and enhance driving safety. Vehicular Edge of Things Computing (VEoTC) solutions aim to exploit these embedded sensors and resources to provide computational services to other users. VEoTC can enhance the Quality of Experience (QoE) of vehicle and mobile users requesting computational tasks by providing context-aware services, otherwise not easily accessible in real time, closer to the users. Additionally, such solutions can extend computational coverage to areas lacking Roadside Unit (RSU) infrastructure. However, VEoTC frameworks face several challenges in effectively localizing and allocating distributed resources and offloading tasks successfully, owing to the high mobility of vehicles and fluctuating user densities. This paper proposes a distributed Machine Learning (ML)-based solution that optimizes task scheduling to smart vehicles and/or RSUs through joint resource allocation and task offloading. We formulate a belief-based optimization problem to maximize the QoE of vehicular users while providing performance guarantees that account for the geospatial uncertainty associated with the availability of embedded resources. We propose a Deep Reinforcement Learning (DRL)-based solution that solves the formulated problem in real time, adapting to dynamic network conditions. We analyze the performance of the proposed approach against benchmark optimization techniques and other ML-based approaches. Furthermore, we conduct hardware-based field test experiments to verify the effectiveness of our proposed algorithm in satisfying the stringent real-time latency requirements of various vehicular applications. According to our extensive simulation and experimental results, the proposed solution has the potential to satisfy the stringent QoE guarantees required for critical road safety applications.
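To make the offloading decision concrete, the sketch below shows a toy tabular RL agent, not the paper's DRL algorithm, that learns where to send a task (locally, to a neighboring smart vehicle, or to an RSU) by rewarding lower end-to-end latency. The state encoding, latency model, and all numeric parameters are invented for illustration; the paper's actual solution uses deep networks and a belief-based formulation over geospatial resource uncertainty.

```python
import random

# Hypothetical illustration (NOT the paper's algorithm): a one-step,
# epsilon-greedy tabular learner that picks an offloading target and
# is rewarded with negative latency. State = (local queue length,
# whether a neighboring vehicle is free). All numbers are made up.

ACTIONS = ["local", "vehicle", "rsu"]

def simulate_latency(action, queue_len, neighbor_free, rng):
    """Toy latency model: local latency grows with the queue,
    a busy neighbor is slow, the RSU has a fixed-ish cost."""
    if action == "local":
        return 10 + 5 * queue_len + rng.random()
    if action == "vehicle":
        return (8 if neighbor_free else 40) + rng.random()
    return 15 + rng.random()  # "rsu"

def train(episodes=2000, alpha=0.2, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value (negative latency)
    for _ in range(episodes):
        # Random network condition: queue depth 0..3, neighbor free or not.
        state = (rng.randint(0, 3), rng.random() < 0.5)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)           # explore
        else:
            a = max(ACTIONS, key=lambda x: q.get((state, x), 0.0))  # exploit
        reward = -simulate_latency(a, state[0], state[1], rng)
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + alpha * (reward - old)  # one-step update
    return q

def best_action(q, state):
    return max(ACTIONS, key=lambda x: q.get((state, x), 0.0))
```

With this toy model, training should learn to offload to a free neighbor when the local queue is long (e.g., `best_action(q, (3, True))`) and to compute locally when the queue is empty and the neighbor is busy. A full DRL approach replaces the table with a neural network and propagates credit across time steps, which is what allows real-time adaptation to dynamic vehicular conditions.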