Deep Reinforcement Learning for Pre-caching and Task Allocation in Internet of Vehicles
Authors: Teng Ma, Xin Chen, Zhuo Ma, Ying Chen
DOI: 10.1109/SmartIoT49966.2020.00021
Published in: 2020 IEEE International Conference on Smart Internet of Things (SmartIoT), August 2020
Citations: 6
Abstract
With the development of the Internet of Vehicles (IoV) and 5G networks, demand for services from vehicle users is increasing. Mobile edge computing offers a solution: processing tasks on edge servers to improve users' quality of experience (QoE). However, because the locations of users in fast-moving vehicles change constantly, transmitting data efficiently and stably remains a challenge. To address this, a pre-caching and task-allocation method based on deep reinforcement learning is proposed in this paper. Files requested by vehicle users are pre-cached on roadside units (RSUs), and transmission tasks are dynamically allocated between vehicle-to-vehicle (V2V) and vehicle-to-roadside-unit (V2R) transmission according to transmission speed. Specifically, pre-caching and task allocation are modeled as Markov decision processes (MDPs). Deep Deterministic Policy Gradient (DDPG) is then applied to determine the optimal pre-caching and task-allocation ratios. The algorithm's performance in different scenarios is analyzed through simulation and compared with other algorithms. The results show that DDPG maximizes the data reception rate of fast-moving vehicles, thereby improving the QoE of vehicle users.
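To illustrate the DDPG setup the abstract describes, the sketch below runs a skeletal DDPG loop on a toy stand-in for the IoV problem. Everything concrete here is an assumption for illustration, not the paper's model: the two-dimensional state (vehicle speed, RSU load), the two-dimensional action (pre-cache ratio, V2V allocation ratio), the reward shape, and the linear actor/critic (the paper would use neural networks). Only the DDPG machinery itself, replay buffer, target networks, soft updates, and the deterministic policy gradient through the critic, is the named technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: state = [vehicle speed, RSU load] in [0, 1]^2,
# action = [pre-cache ratio, V2V allocation ratio] in [0, 1]^2.
# Toy reward: favor RSU pre-caching when RSU load is low, V2V when speed is low.
def step(state, action):
    speed, load = state
    cache, v2v = action
    reward = cache * (1 - load) + v2v * (1 - speed)
    next_state = rng.random(2)
    return next_state, reward

# Linear actor/critic keep the sketch short; real DDPG uses neural networks.
class Linear:
    def __init__(self, n_in, n_out):
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))
    def __call__(self, x):
        return self.W @ x

actor, critic = Linear(2, 2), Linear(4, 1)        # critic input: [state, action]
actor_t = Linear(2, 2); actor_t.W = actor.W.copy()      # target networks
critic_t = Linear(4, 1); critic_t.W = critic.W.copy()

buffer, gamma, tau, lr = [], 0.9, 0.01, 0.01
state = rng.random(2)
for t in range(500):
    # Deterministic policy plus exploration noise, squashed to the valid range
    action = np.clip(actor(state) + rng.normal(0, 0.1, 2), 0.0, 1.0)
    next_state, reward = step(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    # Sample a minibatch from the replay buffer and do one DDPG update
    batch = [buffer[i] for i in rng.integers(0, len(buffer), 16)]
    for s, a, r, s2 in batch:
        a2 = np.clip(actor_t(s2), 0.0, 1.0)
        y = r + gamma * critic_t(np.concatenate([s2, a2]))[0]   # TD target
        x = np.concatenate([s, a])
        td = critic(x)[0] - y
        critic.W -= lr * td * x                   # critic: squared-TD gradient
        # Actor ascends dQ/da * da/dW (deterministic policy gradient);
        # for the linear critic, dQ/da is just its action-column weights.
        dq_da = critic.W[0, 2:]
        actor.W += lr * np.outer(dq_da, s)

    # Soft-update the target networks toward the online networks
    actor_t.W += tau * (actor.W - actor_t.W)
    critic_t.W += tau * (critic.W - critic_t.W)
```

Continuous actions are the reason DDPG fits here: the pre-caching and allocation ratios live in [0, 1], which value-based methods like DQN handle only after coarse discretization.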