{"title":"基于深度强化学习的物联网车依赖感知在线任务卸载","authors":"Chunhong Liu, Huaichen Wang, Mengdi Zhao, Jialei Liu, Xiaoyan Zhao, Peiyan Yuan","doi":"10.1186/s13677-024-00701-0","DOIUrl":null,"url":null,"abstract":"The convergence of artificial intelligence and in-vehicle wireless communication technologies, promises to fulfill the pressing communication needs of the Internet of Vehicles (IoV) while promoting the development of vehicle applications. However, making real-time dependency-aware task offloading decisions is difficult due to the high mobility of vehicles and the dynamic nature of the network environment. This leads to additional application computation time and energy consumption, increasing the risk of offloading failures for computationally intensive and latency-sensitive applications. In this paper, an offloading strategy for vehicle applications that jointly considers latency and energy consumption in the base station cooperative computing model is proposed. Firstly, we establish a collaborative offloading model involving multiple vehicles, multiple base stations, and multiple edge servers. Transferring vehicular applications to the application queue of edge servers and prioritizing them based on their completion deadlines. Secondly, each vehicular application is modeled as a directed acyclic graph (DAG) task with data dependency relationships. Subsequently, we propose a task offloading method based on task dependency awareness in deep reinforcement learning (DAG-DQN). Tasks are assigned to edge servers at different base stations, and edge servers collaborate to process tasks, minimizing vehicle application completion time and reducing edge server energy consumption. Finally, simulation results show that compared with the heuristic method, our proposed DAG-DQN method reduces task completion time by 16%, reduces system energy consumption by 19%, and improves decision-making efficiency by 70%.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dependency-aware online task offloading based on deep reinforcement learning for IoV\",\"authors\":\"Chunhong Liu, Huaichen Wang, Mengdi Zhao, Jialei Liu, Xiaoyan Zhao, Peiyan Yuan\",\"doi\":\"10.1186/s13677-024-00701-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The convergence of artificial intelligence and in-vehicle wireless communication technologies, promises to fulfill the pressing communication needs of the Internet of Vehicles (IoV) while promoting the development of vehicle applications. However, making real-time dependency-aware task offloading decisions is difficult due to the high mobility of vehicles and the dynamic nature of the network environment. This leads to additional application computation time and energy consumption, increasing the risk of offloading failures for computationally intensive and latency-sensitive applications. In this paper, an offloading strategy for vehicle applications that jointly considers latency and energy consumption in the base station cooperative computing model is proposed. Firstly, we establish a collaborative offloading model involving multiple vehicles, multiple base stations, and multiple edge servers. Transferring vehicular applications to the application queue of edge servers and prioritizing them based on their completion deadlines. 
Secondly, each vehicular application is modeled as a directed acyclic graph (DAG) task with data dependency relationships. Subsequently, we propose a task offloading method based on task dependency awareness in deep reinforcement learning (DAG-DQN). Tasks are assigned to edge servers at different base stations, and edge servers collaborate to process tasks, minimizing vehicle application completion time and reducing edge server energy consumption. Finally, simulation results show that compared with the heuristic method, our proposed DAG-DQN method reduces task completion time by 16%, reduces system energy consumption by 19%, and improves decision-making efficiency by 70%.\",\"PeriodicalId\":501257,\"journal\":{\"name\":\"Journal of Cloud Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Cloud Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s13677-024-00701-0\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cloud Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s13677-024-00701-0","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Dependency-aware online task offloading based on deep reinforcement learning for IoV
The convergence of artificial intelligence and in-vehicle wireless communication technologies promises to fulfill the pressing communication needs of the Internet of Vehicles (IoV) while promoting the development of vehicle applications. However, the high mobility of vehicles and the dynamic nature of the network environment make real-time, dependency-aware task offloading decisions difficult. This leads to additional application computation time and energy consumption, increasing the risk of offloading failures for computationally intensive and latency-sensitive applications. In this paper, an offloading strategy for vehicle applications that jointly considers latency and energy consumption in the base station cooperative computing model is proposed. Firstly, we establish a collaborative offloading model involving multiple vehicles, multiple base stations, and multiple edge servers; vehicular applications are transferred to the application queues of the edge servers and prioritized by their completion deadlines. Secondly, each vehicular application is modeled as a directed acyclic graph (DAG) task with data dependency relationships. Subsequently, we propose a dependency-aware task offloading method based on deep reinforcement learning (DAG-DQN): tasks are assigned to edge servers at different base stations, and the edge servers collaborate to process them, minimizing vehicle application completion time and reducing edge server energy consumption. Finally, simulation results show that, compared with the heuristic method, our proposed DAG-DQN method reduces task completion time by 16%, reduces system energy consumption by 19%, and improves decision-making efficiency by 70%.
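To make the DAG modeling concrete, below is a minimal Python sketch of how a vehicular application could be represented as subtasks with data dependencies and queued by completion deadline, as the abstract describes. The class names, fields, and units are illustrative assumptions, not the paper's actual data structures.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch: one vehicular application as a DAG of subtasks with
# data dependencies, queued by completion deadline. All names and fields
# (Subtask, Application, units) are illustrative assumptions.

@dataclass
class Subtask:
    task_id: int
    workload: float            # required CPU cycles (assumed unit)
    data_size: float           # input data to transfer (assumed unit)
    predecessors: list = field(default_factory=list)  # task_ids it depends on

@dataclass(order=True)
class Application:
    deadline: float            # completion deadline; heapq pops earliest first
    app_id: int
    subtasks: list = field(default_factory=list, compare=False)

    def ready_subtasks(self, finished: set) -> list:
        """Subtasks whose data dependencies are all satisfied."""
        return [t for t in self.subtasks
                if t.task_id not in finished
                and all(p in finished for p in t.predecessors)]

# Edge-server application queue, prioritized by completion deadline.
queue: list = []
heapq.heappush(queue, Application(deadline=0.8, app_id=2))
heapq.heappush(queue, Application(deadline=0.3, app_id=1))
urgent = heapq.heappop(queue)   # app_id == 1, the tightest deadline
```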
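Similarly, here is a hedged sketch of the kind of decision step a DAG-DQN-style agent might perform: a small Q-network scores candidate edge servers for a ready subtask under an epsilon-greedy policy, with a reward that jointly penalizes latency and energy in line with the paper's joint objective. The state layout, network sizes, and reward weights are assumptions for illustration, not the paper's exact design.

```python
import random
import torch
import torch.nn as nn

N_SERVERS = 4    # candidate edge servers across base stations (assumed)
STATE_DIM = 10   # e.g. subtask workload/data size + per-server load (assumed)

# Q-network: maps a state vector to one Q-value per offloading action.
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_SERVERS),
)

def choose_server(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy offloading decision over edge servers."""
    if random.random() < epsilon:
        return random.randrange(N_SERVERS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def reward(latency: float, energy: float,
           w_t: float = 0.6, w_e: float = 0.4) -> float:
    """Joint latency/energy penalty; the weights are illustrative."""
    return -(w_t * latency + w_e * energy)

state = torch.rand(STATE_DIM)   # placeholder state vector
action = choose_server(state)   # index of the chosen edge server
```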