Liang Guo, Chen-Khong Tham, Jie Jia, Jian Chen, Xingwei Wang
{"title":"基于生成扩散模型的vc系统联合任务卸载与资源分配QMIX","authors":"Liang Guo , Chen-Khong Tham , Jie Jia , Jian Chen , Xingwei Wang","doi":"10.1016/j.comnet.2025.111516","DOIUrl":null,"url":null,"abstract":"<div><div>To cope with the high computing demand and latency requirements of emerging vehicular applications, vehicular edge computing (VEC) has been regarded as a promising computing paradigm that improves vehicular performance by introducing edge computation offloading for resource-constrained vehicles. Compared to the conventional delay metric, information freshness is more crucial for applications, such as automatic driving, auto navigation, etc., which can effectively avoid potential accidents caused by outdated data. Therefore, we apply the age of information (AoI) to measure the freshness of all vehicles’ tasks. Then, a long-term average AoI minimization problem is formulated by jointly optimizing the edge-cloud cooperation task offloading and resource allocation under time-varying environments. To solve this problem, we propose an optimization-oriented multi-agent deep reinforcement learning (MADRL) framework. Specifically, we propose a generative diffusion model (GDM)-based value function decomposition MADRL algorithm, named GDM-QMIX, to learn power allocation and offloading policies for multiple vehicle agents. Meanwhile, the closed-form solution of the wired transmission rate and computing resources allocation is derived based on Karush-Kuhn–Tucker (KKT) conditions to evaluate the quality of actions of GDM-QMIX, thereby avoiding a huge action space and achieving joint optimization. Simulation results demonstrate the effectiveness of the proposed algorithm in solving the dynamic task offloading and resource allocation problem and the superiority of the proposed algorithm over the benchmark schemes in terms of the average AoI.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"270 ","pages":"Article 111516"},"PeriodicalIF":4.4000,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generative diffusion model-based QMIX for joint task offloading and resource allocation in VEC systems\",\"authors\":\"Liang Guo , Chen-Khong Tham , Jie Jia , Jian Chen , Xingwei Wang\",\"doi\":\"10.1016/j.comnet.2025.111516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To cope with the high computing demand and latency requirements of emerging vehicular applications, vehicular edge computing (VEC) has been regarded as a promising computing paradigm that improves vehicular performance by introducing edge computation offloading for resource-constrained vehicles. Compared to the conventional delay metric, information freshness is more crucial for applications, such as automatic driving, auto navigation, etc., which can effectively avoid potential accidents caused by outdated data. Therefore, we apply the age of information (AoI) to measure the freshness of all vehicles’ tasks. Then, a long-term average AoI minimization problem is formulated by jointly optimizing the edge-cloud cooperation task offloading and resource allocation under time-varying environments. To solve this problem, we propose an optimization-oriented multi-agent deep reinforcement learning (MADRL) framework. 
Specifically, we propose a generative diffusion model (GDM)-based value function decomposition MADRL algorithm, named GDM-QMIX, to learn power allocation and offloading policies for multiple vehicle agents. Meanwhile, the closed-form solution of the wired transmission rate and computing resources allocation is derived based on Karush-Kuhn–Tucker (KKT) conditions to evaluate the quality of actions of GDM-QMIX, thereby avoiding a huge action space and achieving joint optimization. Simulation results demonstrate the effectiveness of the proposed algorithm in solving the dynamic task offloading and resource allocation problem and the superiority of the proposed algorithm over the benchmark schemes in terms of the average AoI.</div></div>\",\"PeriodicalId\":50637,\"journal\":{\"name\":\"Computer Networks\",\"volume\":\"270 \",\"pages\":\"Article 111516\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2025-07-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389128625004839\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625004839","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Generative diffusion model-based QMIX for joint task offloading and resource allocation in VEC systems

Abstract
To cope with the high computing demand and latency requirements of emerging vehicular applications, vehicular edge computing (VEC) has been regarded as a promising computing paradigm that improves vehicular performance by introducing edge computation offloading for resource-constrained vehicles. Compared with the conventional delay metric, information freshness is more crucial for applications such as autonomous driving and automatic navigation, since fresh data can effectively avoid potential accidents caused by outdated information. Therefore, we apply the age of information (AoI) to measure the freshness of all vehicles’ tasks. A long-term average AoI minimization problem is then formulated by jointly optimizing edge-cloud cooperative task offloading and resource allocation under time-varying environments. To solve this problem, we propose an optimization-oriented multi-agent deep reinforcement learning (MADRL) framework. Specifically, we propose a generative diffusion model (GDM)-based value function decomposition MADRL algorithm, named GDM-QMIX, to learn power allocation and offloading policies for multiple vehicle agents. Meanwhile, a closed-form solution for the wired transmission rate and computing resource allocation is derived from the Karush-Kuhn-Tucker (KKT) conditions to evaluate the quality of the actions chosen by GDM-QMIX, thereby avoiding a huge action space and achieving joint optimization. Simulation results demonstrate the effectiveness of the proposed algorithm in solving the dynamic task offloading and resource allocation problem, as well as its superiority over the benchmark schemes in terms of average AoI.
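To make two of the abstract's ingredients concrete, the sketch below illustrates the standard per-slot AoI update and the monotonic QMIX-style mixing of per-agent Q-values that value-decomposition methods such as GDM-QMIX build on. This is a minimal illustration under assumed names and sizes (QMixer, embed_dim, the 5-agent example, etc.); the paper's GDM-based action generator, the VEC state design, and the KKT-based closed-form resource allocation are not reproduced here.

```python
# Minimal, illustrative sketch (not the paper's implementation).
import torch
import torch.nn as nn


def aoi_step(current_aoi: float, slot_len: float, updated: bool, task_delay: float) -> float:
    """One-slot AoI update: age grows linearly with time and resets to the
    completion delay of the freshest received task when an update arrives."""
    return task_delay if updated else current_aoi + slot_len


class QMixer(nn.Module):
    """Mixes per-vehicle (per-agent) Q-values into a joint Q_tot.

    Monotonicity in each agent's Q-value is enforced by taking the absolute
    value of the hypernetwork-generated mixing weights, so maximizing each
    agent's local Q also maximizes the joint value.
    """

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: the global state conditions the mixing weights/biases.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        bs = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.view(bs, 1, -1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2  # (batch, 1, 1)
        return q_tot.view(bs, 1)


if __name__ == "__main__":
    # Example: 5 vehicle agents, a 20-dimensional global state, batch of 8 transitions.
    mixer = QMixer(n_agents=5, state_dim=20)
    q_tot = mixer(torch.randn(8, 5), torch.randn(8, 20))
    print(q_tot.shape)  # torch.Size([8, 1])
```

In the full method described by the abstract, the per-agent Q-values would be produced by diffusion-model-based agent networks and the mixer's output would be trained against an AoI-based reward; those components are omitted from this sketch.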
About the journal
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.