Resource allocation scheduling scheme for task migration and offloading in 6G Cybertwin internet of vehicles based on DRL

Rui Wei, Tuanfa Qin, Jinbao Huang, Ying Yang, Junyu Ren, Lei Yang

IET Communications, vol. 18, issue 18, pp. 1244-1265, published 6 September 2024. DOI: 10.1049/cmu2.12826
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12826
As vehicular technology advances, intelligent vehicles generate numerous computation-intensive tasks, challenging the computational resources of both the vehicles and the Internet of Vehicles (IoV). Traditional IoV struggles with fixed network structures and limited scalability, and cannot keep pace with growing computational demands or next-generation mobile communication technologies. In congested areas, near-end Mobile Edge Computing (MEC) resources are often overtaxed while far-end MEC servers sit underused, resulting in poor service quality. A novel network framework that combines sixth-generation mobile communication (6G) and digital twin technologies with task migration promises to alleviate these inefficiencies. To address these challenges, a task migration and re-offloading model based on task attribute classification is introduced, employing a hybrid deep reinforcement learning (DRL) algorithm, the Dueling Double Q-Network DDPG (QDPG). This algorithm merges the strengths of the Deep Deterministic Policy Gradient (DDPG) and the Dueling Double Deep Q-Network (D3QN), handling continuous and discrete action domains to optimize task migration and re-offloading in IoV. Incorporating the Mini Batch K-Means algorithm further improves the learning efficiency and optimization performance of the DRL algorithm. Experimental results show that QDPG significantly boosts task efficiency and computational performance, providing a robust solution for resource allocation in IoV.
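The abstract describes a hybrid action space: a dueling double Q-network handles the discrete decisions (which edge server a task is offloaded or migrated to), a DDPG-style actor handles the continuous decisions (how much computing or bandwidth resource to allocate), and Mini Batch K-Means pre-classifies tasks by their attributes. The paper does not publish code; the Python sketch below only illustrates how such a discrete/continuous split and task classification could be wired together. All attribute names, dimensions, cluster counts, and network sizes are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import MiniBatchKMeans

# --- Task attribute classification (Mini Batch K-Means) ---------------------
# Hypothetical attributes per task: [data size (MB), CPU demand (Gcycles), deadline (ms)]
rng = np.random.default_rng(0)
tasks = np.column_stack([
    rng.uniform(0.1, 50.0, 1000),
    rng.uniform(0.5, 10.0, 1000),
    rng.uniform(10.0, 500.0, 1000),
])
task_class = MiniBatchKMeans(n_clusters=3, batch_size=64, n_init=3,
                             random_state=0).fit_predict(tasks)

# --- Hybrid DRL heads: discrete (D3QN-style) + continuous (DDPG-style) ------
class DuelingQHead(nn.Module):
    """Dueling Q-values over discrete choices, e.g. which MEC server receives the task."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        adv = self.advantage(h)
        return self.value(h) + adv - adv.mean(dim=-1, keepdim=True)

class DDPGActor(nn.Module):
    """Deterministic actor for continuous choices, e.g. CPU/bandwidth allocation fractions."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Sigmoid(),  # fractions in [0, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# State = raw task attributes plus the one-hot task class from the clustering step.
one_hot = nn.functional.one_hot(torch.as_tensor(task_class, dtype=torch.long), num_classes=3)
state = torch.cat([torch.as_tensor(tasks, dtype=torch.float32), one_hot.float()], dim=1)

q_head = DuelingQHead(state_dim=state.shape[1], n_actions=5)  # e.g. 5 candidate MEC servers
actor = DDPGActor(state_dim=state.shape[1], action_dim=2)     # e.g. CPU and bandwidth shares

with torch.no_grad():
    offload_target = q_head(state).argmax(dim=-1)  # discrete branch
    allocation = actor(state)                      # continuous branch
```

In the paper these two branches are trained jointly as QDPG; here they are untrained modules whose only purpose is to make the discrete/continuous action split and the use of the task class in the state concrete.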
Journal description:
IET Communications covers fundamental and generic research aimed at a better understanding of communication technologies, harnessing signals to build better-performing communication systems over wired and/or wireless media. The journal is particularly interested in research papers reporting novel solutions to the dominant problems of noise, interference, timing, and errors, and to reducing system deficiencies such as the waste of scarce resources like spectrum, energy, and bandwidth.
Topics include, but are not limited to:
Coding and Communication Theory;
Modulation and Signal Design;
Wired, Wireless and Optical Communication;
Communication Systems
Special Issues. Current Call for Papers:
Cognitive and AI-enabled Wireless and Mobile - https://digital-library.theiet.org/files/IET_COM_CFP_CAWM.pdf
UAV-Enabled Mobile Edge Computing - https://digital-library.theiet.org/files/IET_COM_CFP_UAV.pdf