{"title":"车辆边缘智能:基于drl的车辆- rsu -边缘协同网络任务推理资源编排","authors":"Wenhao Fan;Yang Yu;Chenhui Bao;Yuan’an Liu","doi":"10.1109/TMC.2025.3572296","DOIUrl":null,"url":null,"abstract":"Vehicular edge intelligence, distinct from traditional edge intelligence, exhibits unique characteristics, including the mobility of vehicles, uneven spatial and temporal distribution of vehicles, and variability in the AI models deployed on vehicles, Roadside Units (RSUs), and edge servers (ESs). In this paper, we propose a Deep Reinforcement Learning (DRL)-based resource orchestration scheme for task inference in vehicle-RSU-edge collaborative networks. In our approach, vehicles’ inference tasks can be processed on the vehicles, RSUs, or ESs, encompassing a total of 9 possible scenarios based on the cross-RSU mobility of vehicles. The scheme jointly optimizes task processing decision-making, transmission power allocation, computational resource allocation, and transmission rate allocation. The objective is to minimize the total cost, which involves a trade-off between task processing latency, energy consumption and inference error rate across all vehicle tasks. We design a DRL algorithm that decomposes the original optimization problem into sub-problems and efficiently solves them by combining the Softmax Deep Double Deterministic Policy Gradients (SD3) algorithm with multiple numerical methods. We analyzed the complexity and convergence of the algorithm. Specifically, we demonstrated its low complexity and fast, stable convergence, which prove its effectiveness in solving the problem. And we demonstrate the superiority of our scheme by comparing it with 5 benchmark schemes across 6 different scenarios.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"24 10","pages":"10927-10944"},"PeriodicalIF":9.2000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vehicular Edge Intelligence: DRL-Based Resource Orchestration for Task Inference in Vehicle-RSU-Edge Collaborative Networks\",\"authors\":\"Wenhao Fan;Yang Yu;Chenhui Bao;Yuan’an Liu\",\"doi\":\"10.1109/TMC.2025.3572296\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Vehicular edge intelligence, distinct from traditional edge intelligence, exhibits unique characteristics, including the mobility of vehicles, uneven spatial and temporal distribution of vehicles, and variability in the AI models deployed on vehicles, Roadside Units (RSUs), and edge servers (ESs). In this paper, we propose a Deep Reinforcement Learning (DRL)-based resource orchestration scheme for task inference in vehicle-RSU-edge collaborative networks. In our approach, vehicles’ inference tasks can be processed on the vehicles, RSUs, or ESs, encompassing a total of 9 possible scenarios based on the cross-RSU mobility of vehicles. The scheme jointly optimizes task processing decision-making, transmission power allocation, computational resource allocation, and transmission rate allocation. The objective is to minimize the total cost, which involves a trade-off between task processing latency, energy consumption and inference error rate across all vehicle tasks. We design a DRL algorithm that decomposes the original optimization problem into sub-problems and efficiently solves them by combining the Softmax Deep Double Deterministic Policy Gradients (SD3) algorithm with multiple numerical methods. 
We analyzed the complexity and convergence of the algorithm. Specifically, we demonstrated its low complexity and fast, stable convergence, which prove its effectiveness in solving the problem. And we demonstrate the superiority of our scheme by comparing it with 5 benchmark schemes across 6 different scenarios.\",\"PeriodicalId\":50389,\"journal\":{\"name\":\"IEEE Transactions on Mobile Computing\",\"volume\":\"24 10\",\"pages\":\"10927-10944\"},\"PeriodicalIF\":9.2000,\"publicationDate\":\"2025-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11008830/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11008830/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Vehicular edge intelligence, distinct from traditional edge intelligence, exhibits unique characteristics, including the mobility of vehicles, the uneven spatial and temporal distribution of vehicles, and the variability of the AI models deployed on vehicles, Roadside Units (RSUs), and edge servers (ESs). In this paper, we propose a Deep Reinforcement Learning (DRL)-based resource orchestration scheme for task inference in vehicle-RSU-edge collaborative networks. In our approach, a vehicle's inference tasks can be processed on the vehicle itself, on RSUs, or on ESs, yielding a total of 9 possible scenarios depending on the cross-RSU mobility of vehicles. The scheme jointly optimizes task processing decision-making, transmission power allocation, computational resource allocation, and transmission rate allocation. The objective is to minimize the total cost, which involves a trade-off between task processing latency, energy consumption, and inference error rate across all vehicle tasks. We design a DRL algorithm that decomposes the original optimization problem into sub-problems and solves them efficiently by combining the Softmax Deep Double Deterministic Policy Gradients (SD3) algorithm with multiple numerical methods. We analyze the complexity and convergence of the algorithm, demonstrating its low complexity and its fast, stable convergence, which confirm its effectiveness in solving the problem. Finally, we demonstrate the superiority of our scheme by comparing it with 5 benchmark schemes across 6 different scenarios.
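To make the optimization objective concrete, the following minimal Python sketch illustrates the kind of total cost the abstract describes: a trade-off between task processing latency, energy consumption, and inference error rate, summed over all vehicle tasks, whose negative a DRL agent could use as its reward. The weight parameters (w_t, w_e, w_r), field names, and example numbers are illustrative assumptions and are not taken from the paper.

# Minimal sketch (assumptions, not the paper's exact formulation): a per-task cost
# that trades off latency, energy, and inference error rate, summed over all tasks.
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    latency_s: float    # end-to-end processing latency of the task (seconds)
    energy_j: float     # energy spent on computing and transmission (joules)
    error_rate: float   # inference error rate of the chosen model/placement

def task_cost(o: TaskOutcome, w_t: float = 1.0, w_e: float = 1.0, w_r: float = 1.0) -> float:
    """Weighted cost of one vehicle task; weights w_t, w_e, w_r are hypothetical."""
    return w_t * o.latency_s + w_e * o.energy_j + w_r * o.error_rate

def total_cost(outcomes: list) -> float:
    # The orchestration scheme minimizes this sum over all vehicle tasks;
    # a DRL agent could use the negative total cost as its per-step reward.
    return sum(task_cost(o) for o in outcomes)

# Example: the same task processed locally on the vehicle vs. offloaded to an RSU.
local_run = TaskOutcome(latency_s=0.35, energy_j=0.80, error_rate=0.12)
rsu_run = TaskOutcome(latency_s=0.20, energy_j=0.30, error_rate=0.05)
print(total_cost([local_run]), total_cost([rsu_run]))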
Journal introduction:
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.