{"title":"基于 DRL 的分散计算中的车联网任务和计算卸载","authors":"Ziyang Zhang, Keyu Gu, Zijie Xu","doi":"10.1007/s10723-023-09729-z","DOIUrl":null,"url":null,"abstract":"<p>This paper focuses on the problem of computation offloading in a high-mobility Internet of Vehicles (IoVs) environment. The goal is to address the challenges related to latency, energy consumption, and payment cost requirements. The approach considers both moving and parked vehicles as fog nodes, which can assist in offloading computational tasks. However, as the number of vehicles increases, the action space for each agent grows exponentially, posing a challenge for decentralised decision-making. The dynamic nature of vehicular mobility further complicates the network dynamics, requiring joint cooperative behaviour from the learning agents to achieve convergence. The traditional deep reinforcement learning (DRL) approach for offloading in IoVs treats each agent as an independent learner. It ignores the actions of other agents during the training process. This paper utilises a cooperative three-layer decentralised architecture called Vehicle-Assisted Multi-Access Edge Computing (VMEC) to overcome this limitation. The VMEC network consists of three layers: the fog, cloudlet, and cloud layers. In the fog layer, vehicles within associated Roadside Units (RSUs) and neighbouring RSUs participate as fog nodes. The middle layer comprises Mobile Edge Computing (MEC) servers, while the top layer represents the cloud infrastructure. To address the dynamic task offloading problem in VMEC, the paper proposes using a Decentralized Framework of Task and Computational Offloading (DFTCO), which utilises the strength of MADRL and NOMA techniques. 
This approach considers multiple agents making offloading decisions simultaneously and aims to find the optimal matching between tasks and available resources.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DRL-based Task and Computational Offloading for Internet of Vehicles in Decentralized Computing\",\"authors\":\"Ziyang Zhang, Keyu Gu, Zijie Xu\",\"doi\":\"10.1007/s10723-023-09729-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This paper focuses on the problem of computation offloading in a high-mobility Internet of Vehicles (IoVs) environment. The goal is to address the challenges related to latency, energy consumption, and payment cost requirements. The approach considers both moving and parked vehicles as fog nodes, which can assist in offloading computational tasks. However, as the number of vehicles increases, the action space for each agent grows exponentially, posing a challenge for decentralised decision-making. The dynamic nature of vehicular mobility further complicates the network dynamics, requiring joint cooperative behaviour from the learning agents to achieve convergence. The traditional deep reinforcement learning (DRL) approach for offloading in IoVs treats each agent as an independent learner. It ignores the actions of other agents during the training process. This paper utilises a cooperative three-layer decentralised architecture called Vehicle-Assisted Multi-Access Edge Computing (VMEC) to overcome this limitation. The VMEC network consists of three layers: the fog, cloudlet, and cloud layers. In the fog layer, vehicles within associated Roadside Units (RSUs) and neighbouring RSUs participate as fog nodes. 
The middle layer comprises Mobile Edge Computing (MEC) servers, while the top layer represents the cloud infrastructure. To address the dynamic task offloading problem in VMEC, the paper proposes using a Decentralized Framework of Task and Computational Offloading (DFTCO), which utilises the strength of MADRL and NOMA techniques. This approach considers multiple agents making offloading decisions simultaneously and aims to find the optimal matching between tasks and available resources.</p>\",\"PeriodicalId\":3,\"journal\":{\"name\":\"ACS Applied Electronic Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.3000,\"publicationDate\":\"2024-01-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Electronic Materials\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10723-023-09729-z\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10723-023-09729-z","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
DRL-based Task and Computational Offloading for Internet of Vehicles in Decentralized Computing
This paper focuses on the problem of computation offloading in a high-mobility Internet of Vehicles (IoV) environment. The goal is to address the challenges related to latency, energy consumption, and monetary (payment) cost. The approach treats both moving and parked vehicles as fog nodes that can assist in offloading computational tasks. However, as the number of vehicles increases, the action space for each agent grows exponentially, posing a challenge for decentralised decision-making. The dynamic nature of vehicular mobility further complicates the network dynamics, requiring joint cooperative behaviour from the learning agents to achieve convergence. The traditional deep reinforcement learning (DRL) approach to offloading in the IoV treats each agent as an independent learner, ignoring the actions of other agents during training.

To overcome this limitation, this paper adopts a cooperative three-layer decentralised architecture called Vehicle-Assisted Multi-Access Edge Computing (VMEC). The VMEC network consists of three layers: the fog, cloudlet, and cloud layers. In the fog layer, vehicles within the associated Roadside Unit (RSU) and neighbouring RSUs participate as fog nodes. The middle layer comprises Mobile Edge Computing (MEC) servers, while the top layer represents the cloud infrastructure. To address the dynamic task offloading problem in VMEC, the paper proposes a Decentralized Framework of Task and Computational Offloading (DFTCO), which leverages multi-agent deep reinforcement learning (MADRL) and non-orthogonal multiple access (NOMA) techniques. This approach considers multiple agents making offloading decisions simultaneously and aims to find the optimal matching between tasks and available resources.
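The abstract does not include implementation details of DFTCO, so the following is only a hypothetical, minimal sketch of the generic idea it builds on: a single vehicle agent learning where to offload a task (locally, to a fog vehicle, to an MEC server, or to the cloud) by minimising a weighted latency-plus-energy cost via tabular Q-learning. The target set, the cost numbers, and the single-state formulation are all invented for illustration; the paper's actual method is multi-agent, deep, and NOMA-aware.

```python
import random

# Hypothetical offloading targets and per-target (latency, energy) costs.
# These numbers are illustrative only -- they are not from the paper.
TARGETS = ["local", "fog", "mec", "cloud"]
COSTS = {"local": (8.0, 5.0), "fog": (3.0, 2.0), "mec": (2.0, 4.0), "cloud": (6.0, 1.0)}

def cost(target, w_latency=0.5, w_energy=0.5):
    """Weighted cost an agent tries to minimise (lower is better)."""
    latency, energy = COSTS[target]
    return w_latency * latency + w_energy * energy

class OffloadAgent:
    """One vehicle's learner: a single-state, epsilon-greedy Q-table over targets."""
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = {t: 0.0 for t in TARGETS}
        self.epsilon, self.alpha = epsilon, alpha

    def act(self):
        # Explore with probability epsilon, otherwise pick the best-known target.
        if random.random() < self.epsilon:
            return random.choice(TARGETS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Incremental Q-update toward the observed reward.
        self.q[action] += self.alpha * (reward - self.q[action])

def train(agent, episodes=2000, seed=0):
    """Run episodes with reward = negative weighted cost; return the learned best target."""
    random.seed(seed)
    for _ in range(episodes):
        a = agent.act()
        agent.update(a, -cost(a))
    return max(agent.q, key=agent.q.get)

if __name__ == "__main__":
    print(train(OffloadAgent()))  # converges to the cheapest target, "fog"
```

With the invented costs above, the fog target has the lowest weighted cost (0.5·3.0 + 0.5·2.0 = 2.5), so the agent learns to prefer it. The cooperative MADRL setting the paper targets differs precisely in that each agent's reward would also depend on the other agents' simultaneous offloading choices (e.g. contention for the same fog node), which an independent learner like this one cannot capture.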