{"title":"面向物联网边缘智能的车载计算能力网络:基于ma - ddpg的鲁棒任务卸载和资源分配","authors":"Yi Liu;Li Jiang;Chau Yuen;Yan Zhang","doi":"10.1109/JIOT.2025.3580736","DOIUrl":null,"url":null,"abstract":"The deep integration of Internet of Things (IoT) and vehicular networks demands ultrareliable, low-latency computing paradigms to support emerging applications like autonomous driving and smart traffic management. Existing mobile-edge computing (MEC) frameworks, however, struggle with dynamic resource heterogeneity, intermittent connectivity, and inefficient coordination among distributed nodes. To address these challenges, this article proposes vehicular computing power networks (VCPNs), an IoT-driven edge intelligence framework that orchestrates computational resources from mobile user equipments (MUEs), connected vehicles, and edge servers. We formulate a joint optimization problem to minimize end-to-end task latency by finding optimal task offloading decisions and resource allocation (e.g., CPU and bandwidth) policies under time-varying IoT channel conditions and node mobility. To enable decentralized coordination in IoT environment, we model the problem as a multiagent Markov decision process (MDP) and propose a multiagent deep deterministic policy gradient (MA-DDPG) algorithm in which agents (MUEs, vehicles, servers) collaboratively learn policies to optimize task scheduling and resource sharing. Furthermore, we design a robust MA-DDPG variant with error-resilient experience replay and channel-adaptive reward mechanisms to ensure reliable training under packet loss and unstable connectivity. Numerical results demonstrate that VCPN reduces average task latency and improves energy efficiency compared to federated MEC baselines. The proposed MA-DDPG algorithm achieves convergence stability in high-mobility scenarios, outperforming conventional deep reinforcement learning methods.","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"12 18","pages":"36868-36879"},"PeriodicalIF":8.9000,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Vehicular Computing Power Networks for IoT-Driven Edge Intelligence: MA-DDPG-Based Robust Task Offloading and Resource Allocation\",\"authors\":\"Yi Liu;Li Jiang;Chau Yuen;Yan Zhang\",\"doi\":\"10.1109/JIOT.2025.3580736\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The deep integration of Internet of Things (IoT) and vehicular networks demands ultrareliable, low-latency computing paradigms to support emerging applications like autonomous driving and smart traffic management. Existing mobile-edge computing (MEC) frameworks, however, struggle with dynamic resource heterogeneity, intermittent connectivity, and inefficient coordination among distributed nodes. To address these challenges, this article proposes vehicular computing power networks (VCPNs), an IoT-driven edge intelligence framework that orchestrates computational resources from mobile user equipments (MUEs), connected vehicles, and edge servers. We formulate a joint optimization problem to minimize end-to-end task latency by finding optimal task offloading decisions and resource allocation (e.g., CPU and bandwidth) policies under time-varying IoT channel conditions and node mobility. 
To enable decentralized coordination in IoT environment, we model the problem as a multiagent Markov decision process (MDP) and propose a multiagent deep deterministic policy gradient (MA-DDPG) algorithm in which agents (MUEs, vehicles, servers) collaboratively learn policies to optimize task scheduling and resource sharing. Furthermore, we design a robust MA-DDPG variant with error-resilient experience replay and channel-adaptive reward mechanisms to ensure reliable training under packet loss and unstable connectivity. Numerical results demonstrate that VCPN reduces average task latency and improves energy efficiency compared to federated MEC baselines. The proposed MA-DDPG algorithm achieves convergence stability in high-mobility scenarios, outperforming conventional deep reinforcement learning methods.\",\"PeriodicalId\":54347,\"journal\":{\"name\":\"IEEE Internet of Things Journal\",\"volume\":\"12 18\",\"pages\":\"36868-36879\"},\"PeriodicalIF\":8.9000,\"publicationDate\":\"2025-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Internet of Things Journal\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11039641/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Internet of Things Journal","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11039641/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Vehicular Computing Power Networks for IoT-Driven Edge Intelligence: MA-DDPG-Based Robust Task Offloading and Resource Allocation
The deep integration of the Internet of Things (IoT) and vehicular networks demands ultrareliable, low-latency computing paradigms to support emerging applications such as autonomous driving and smart traffic management. Existing mobile-edge computing (MEC) frameworks, however, struggle with dynamic resource heterogeneity, intermittent connectivity, and inefficient coordination among distributed nodes. To address these challenges, this article proposes vehicular computing power networks (VCPNs), an IoT-driven edge intelligence framework that orchestrates computational resources from mobile user equipment (MUEs), connected vehicles, and edge servers. We formulate a joint optimization problem that minimizes end-to-end task latency by finding optimal task offloading decisions and resource allocation (e.g., CPU and bandwidth) policies under time-varying IoT channel conditions and node mobility. To enable decentralized coordination in IoT environments, we model the problem as a multiagent Markov decision process (MDP) and propose a multiagent deep deterministic policy gradient (MA-DDPG) algorithm in which agents (MUEs, vehicles, and servers) collaboratively learn policies to optimize task scheduling and resource sharing. Furthermore, we design a robust MA-DDPG variant with error-resilient experience replay and channel-adaptive reward mechanisms to ensure reliable training under packet loss and unstable connectivity. Numerical results demonstrate that the VCPN reduces average task latency and improves energy efficiency compared to federated MEC baselines. The proposed MA-DDPG algorithm achieves stable convergence in high-mobility scenarios, outperforming conventional deep reinforcement learning methods.
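The abstract describes an MA-DDPG scheme with error-resilient experience replay and a channel-adaptive reward, but gives no implementation details. The sketch below is only a minimal illustration of how such a scheme is commonly structured, assuming PyTorch, toy observation/action sizes, per-agent centralized critics with decentralized actors, and hypothetical readings of the replay and reward mechanisms (dropping packet-loss-corrupted transitions; a latency penalty plus a channel-quality term). It is not the authors' implementation.

```python
# Minimal, illustrative MA-DDPG sketch for the task-offloading setting in the abstract.
# NOT the authors' code: dimensions, the replay buffer, and the reward are hypothetical.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 3  # hypothetical sizes (e.g., MUE, vehicle, edge server)


class Actor(nn.Module):
    """Deterministic policy: local observation -> offloading/resource-allocation action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())  # actions in [-1, 1]

    def forward(self, obs):
        return self.net(obs)


class Critic(nn.Module):
    """Centralized critic: joint observations and actions of all agents -> Q-value."""
    def __init__(self):
        super().__init__()
        in_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))


class ResilientReplay:
    """Replay buffer that skips transitions flagged as corrupted by packet loss
    (one plausible reading of 'error-resilient experience replay')."""
    def __init__(self, capacity=50_000):
        self.buf = deque(maxlen=capacity)

    def push(self, obs, act, rew, next_obs, corrupted=False):
        if not corrupted:  # keep only intact samples
            self.buf.append((obs, act, rew, next_obs))

    def sample(self, batch_size):
        batch = random.sample(list(self.buf), batch_size)
        return [torch.stack(x) for x in zip(*batch)]


def channel_adaptive_reward(latency, channel_gain, alpha=1.0, beta=0.1):
    """Hypothetical reward shaping: penalize end-to-end latency, lightly reward good channels."""
    return -alpha * latency + beta * channel_gain


def maddpg_update(actors, critics, target_actors, target_critics,
                  actor_opts, critic_opts, batch, gamma=0.99):
    """One MA-DDPG step per agent: centralized critic update, decentralized actor update.
    Target-network soft updates are omitted for brevity."""
    obs, act, rew, next_obs = batch          # shapes: [B, N_AGENTS, ...]
    bsz = obs.shape[0]
    with torch.no_grad():
        next_act = torch.stack(
            [target_actors[i](next_obs[:, i]) for i in range(N_AGENTS)], dim=1)
    for i in range(N_AGENTS):
        # Critic step: TD target built from target networks over joint information.
        with torch.no_grad():
            q_next = target_critics[i](next_obs.reshape(bsz, -1), next_act.reshape(bsz, -1))
            y = rew[:, i:i + 1] + gamma * q_next
        q = critics[i](obs.reshape(bsz, -1), act.reshape(bsz, -1))
        critic_loss = F.mse_loss(q, y)
        critic_opts[i].zero_grad(); critic_loss.backward(); critic_opts[i].step()

        # Actor step: agent i's action comes from its current policy; others stay fixed.
        acts = [act[:, j] for j in range(N_AGENTS)]
        acts[i] = actors[i](obs[:, i])
        actor_loss = -critics[i](obs.reshape(bsz, -1), torch.cat(acts, dim=-1)).mean()
        actor_opts[i].zero_grad(); actor_loss.backward(); actor_opts[i].step()
```

In a setup like this, the critics see joint observations and actions only during training, while each actor acts on its local observation at execution time, which is one common way to realize the decentralized coordination the abstract targets.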
About the journal:
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include: IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in standards development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, and ETSI.