{"title":"基于深度强化学习的移动车辆多跳任务卸载迁移与执行决策","authors":"Wenjie Zhou, Tian Zhang, Zekun Lu, Linbo Zhai","doi":"10.1016/j.vehcom.2025.100950","DOIUrl":null,"url":null,"abstract":"<div><div>As the Internet of Things (IoT) drives the development of Vehicular Edge Computing (VEC), there is a surge in computational demand from emerging in-vehicle applications. Most existing studies do not fully consider the frequent changes in network topology under high mobility of vehicles and the underutilization of idle resources by single-hop offloading. To this end, we propose a task offloading scheme for vehicular edge computing based on multi-hop offloading. The scheme allows task vehicles to offload tasks to service vehicles with excess idle resources outside the communication range, and adapts to dynamic changes in network topology by introducing the concept of neighboring vehicle connection time. This study aims to minimize the delayed energy consumption utility value of the task under the conditions of satisfying the maximum task delay limit, vehicle computational and storage resource constraints. In response to this NP-hard problem, a two-stage reinforcement learning strategy MOCDD (combining Deep Q Network (DQN) and Deep Deterministic Policy Gradient (DDPG)) is proposed to divide the mixed action space into pure discrete and pure continuous action space to determine task migration, executive decision and vehicle transmission power. 
Simulation results verify the effectiveness of the proposed scheme.</div></div>","PeriodicalId":54346,"journal":{"name":"Vehicular Communications","volume":"55 ","pages":"Article 100950"},"PeriodicalIF":5.8000,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep reinforcement learning based migration and execution decisions for multi-hop task offloading in mobile vehicle edge computing\",\"authors\":\"Wenjie Zhou, Tian Zhang, Zekun Lu, Linbo Zhai\",\"doi\":\"10.1016/j.vehcom.2025.100950\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>As the Internet of Things (IoT) drives the development of Vehicular Edge Computing (VEC), there is a surge in computational demand from emerging in-vehicle applications. Most existing studies do not fully consider the frequent changes in network topology under high mobility of vehicles and the underutilization of idle resources by single-hop offloading. To this end, we propose a task offloading scheme for vehicular edge computing based on multi-hop offloading. The scheme allows task vehicles to offload tasks to service vehicles with excess idle resources outside the communication range, and adapts to dynamic changes in network topology by introducing the concept of neighboring vehicle connection time. This study aims to minimize the delayed energy consumption utility value of the task under the conditions of satisfying the maximum task delay limit, vehicle computational and storage resource constraints. In response to this NP-hard problem, a two-stage reinforcement learning strategy MOCDD (combining Deep Q Network (DQN) and Deep Deterministic Policy Gradient (DDPG)) is proposed to divide the mixed action space into pure discrete and pure continuous action space to determine task migration, executive decision and vehicle transmission power. 
Simulation results verify the effectiveness of the proposed scheme.</div></div>\",\"PeriodicalId\":54346,\"journal\":{\"name\":\"Vehicular Communications\",\"volume\":\"55 \",\"pages\":\"Article 100950\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2025-07-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Vehicular Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214209625000774\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vehicular Communications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214209625000774","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Deep reinforcement learning based migration and execution decisions for multi-hop task offloading in mobile vehicle edge computing
As the Internet of Things (IoT) drives the development of Vehicular Edge Computing (VEC), computational demand from emerging in-vehicle applications is surging. Most existing studies do not fully account for the frequent network-topology changes caused by high vehicle mobility, or for the idle resources left unused by single-hop offloading. To this end, we propose a multi-hop task offloading scheme for vehicular edge computing. The scheme allows task vehicles to offload tasks to service vehicles with spare idle resources beyond direct communication range, and adapts to dynamic topology changes by introducing the concept of neighboring-vehicle connection time. The objective is to minimize a utility value combining task delay and energy consumption, subject to the maximum task-delay limit and the vehicles' computational and storage resource constraints. For this NP-hard problem, a two-stage reinforcement learning strategy, MOCDD (combining Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG)), is proposed: it splits the hybrid action space into a purely discrete and a purely continuous part to determine task migration, execution decisions, and vehicle transmission power. Simulation results verify the effectiveness of the proposed scheme.
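The two-stage split described above can be illustrated with a toy sketch: a DQN-like discrete stage picks the migration/execution target, and a DDPG-like continuous stage picks the transmission power. This is a minimal stand-in under assumed interfaces, not the paper's MOCDD implementation; all names (choose_target, choose_power, P_MIN, P_MAX) and the tabular/linear policies are illustrative simplifications of the neural networks the paper uses.

```python
import math
import random

P_MIN, P_MAX = 0.1, 1.0   # assumed transmission-power bounds (watts)

def choose_target(q_values, epsilon=0.1, rng=random):
    """Discrete stage (DQN-like): epsilon-greedy over Q-values of candidate
    targets, e.g. local execution or multi-hop service vehicles."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def choose_power(state_features, weights):
    """Continuous stage (DDPG-like): a deterministic policy whose linear
    score is squashed into [P_MIN, P_MAX] via a logistic function."""
    score = sum(w * x for w, x in zip(weights, state_features))
    squashed = 1.0 / (1.0 + math.exp(-score))
    return P_MIN + (P_MAX - P_MIN) * squashed

rng = random.Random(0)
q = [0.2, 0.9, 0.4]                      # toy Q-values for three targets
target = choose_target(q, epsilon=0.0, rng=rng)   # greedy -> argmax
power = choose_power([0.5, -0.2], [1.0, 0.5])
print(target, round(power, 3))
```

In the paper's setting the discrete stage would be a trained DQN and the continuous stage a DDPG actor; the sketch only shows how a hybrid action decomposes into one discrete choice followed by one bounded continuous choice.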
About the journal:
Vehicular communications is a growing area covering communications between vehicles and with roadside communication infrastructure. Advances in wireless communications are making it possible to share information in real time between vehicles and infrastructure. This has enabled applications that increase vehicle safety and connect passengers to the Internet. Standardization efforts on vehicular communication are also underway to make vehicular transportation safer, greener, and easier.
The aim of the journal is to publish high-quality peer-reviewed papers in the area of vehicular communications. The scope encompasses all types of communications involving vehicles, including vehicle-to-vehicle and vehicle-to-infrastructure. The scope includes (but is not limited to) the following topics related to vehicular communications:
Vehicle to vehicle and vehicle to infrastructure communications
Channel modelling, modulating and coding
Congestion Control and scalability issues
Protocol design, testing and verification
Routing in vehicular networks
Security issues and countermeasures
Deployment and field testing
Reducing energy consumption and enhancing safety of vehicles
Wireless in–car networks
Data collection and dissemination methods
Mobility and handover issues
Safety and driver assistance applications
UAV
Underwater communications
Autonomous cooperative driving
Social networks
Internet of vehicles
Standardization of protocols.