{"title":"Research on Task Offloading and Typical Application Based on Deep Reinforcement Learning and Device-Edge-Cloud Collaboration","authors":"Lingqiu Zeng, Han Hu, Qingwen Han, L. Ye, Yu Lei","doi":"10.1109/ANZCC59813.2024.10432815","DOIUrl":null,"url":null,"abstract":"The ever evolving intelligent transportation systems may be able to provide low latency and high-quality service for intelligent connected vehicles (ICVs) on the basis of device-edge-cloud architecture. To match the requirement of vehicle-oriented task computing, the task offloading technology has received extensive attention, while making correct and fast offloading decisions to improve highly dynamic vehicular users’ experience is still a considerable challenge. In this paper, we study a device-edge-cloud architecture, where tasks from vehicles can be partially offloaded with a dynamically offloading proportion. To deal with this problem, we firstly introduce SPSO (serial particle swarm optimization) algorithm to search optimal connected MEC (Multi-Access Edge Computing) node. Then we further design a novel offloading strategy based on the deep Q network (DQN), prioritized experience replay based double deep Q-learning network (PERDDQN), which considers priority weight of the sample and sampling probability in loss function definition. A typical complex task, bus remote takeover, is selected to verify the performance of proposed approach. Simulation results show that PERDDQN has lower system cost, faster convergence speed and higher task success rate than the other comparison algorithms.","PeriodicalId":518506,"journal":{"name":"2024 Australian & New Zealand Control Conference (ANZCC)","volume":"1427 ","pages":"13-18"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2024 Australian & New Zealand Control Conference (ANZCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ANZCC59813.2024.10432815","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Ever-evolving intelligent transportation systems may be able to provide low-latency, high-quality service for intelligent connected vehicles (ICVs) on the basis of a device-edge-cloud architecture. To meet the requirements of vehicle-oriented task computing, task offloading technology has received extensive attention; however, making correct and fast offloading decisions to improve the experience of highly dynamic vehicular users remains a considerable challenge. In this paper, we study a device-edge-cloud architecture in which tasks from vehicles can be partially offloaded with a dynamically adjusted offloading proportion. To address this problem, we first introduce a serial particle swarm optimization (SPSO) algorithm to search for the optimal connected multi-access edge computing (MEC) node. We then design a novel offloading strategy based on the deep Q-network (DQN): a prioritized experience replay based double deep Q-network (PERDDQN), which incorporates each sample's priority weight and sampling probability into the loss function. A typical complex task, bus remote takeover, is selected to verify the performance of the proposed approach. Simulation results show that PERDDQN achieves lower system cost, faster convergence, and a higher task success rate than the comparison algorithms.
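The abstract's core idea, folding sample priority weights and sampling probabilities into a double-DQN loss, follows the standard prioritized experience replay recipe. Below is a minimal PyTorch sketch of such a loss under that assumption; the function and argument names (`q_net`, `target_net`, `n_buffer`, `beta`, etc.) are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch: PER-weighted double-DQN loss in the spirit of PERDDQN.
# All names here are illustrative assumptions; the paper's exact
# formulation (state/action encoding, priorities, hyperparameters) may differ.
import torch
import torch.nn.functional as F

def perddqn_loss(q_net, target_net, batch, probs, n_buffer,
                 gamma=0.99, beta=0.4):
    states, actions, rewards, next_states, dones = batch

    # Double DQN target: the online network selects the next action,
    # the target network evaluates it (reduces overestimation bias).
    with torch.no_grad():
        next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + gamma * (1.0 - dones) * next_q

    # Q-values of the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Importance-sampling weights correct the bias introduced by
    # prioritized sampling: w_i = (N * P(i))^(-beta), normalized by max.
    weights = (n_buffer * probs).pow(-beta)
    weights = weights / weights.max()

    # TD errors weight the loss; their magnitudes are typically fed back
    # to the replay buffer as updated priorities.
    td_errors = targets - q_values
    loss = (weights * td_errors.pow(2)).mean()
    return loss, td_errors.abs().detach()
```

In this formulation, a larger TD error raises a transition's replay priority, while the importance-sampling weight `w_i` keeps the weighted loss an (approximately) unbiased estimate of the uniform-replay objective.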