Title: Mobility Prediction Based Vehicular Edge Caching: A Deep Reinforcement Learning Based Approach
Authors: Yanxiang Guo, Zhaolong Ning, Yu-Kwong Kwok
DOI: 10.1109/ICCT46805.2019.8947024
Venue: 2019 IEEE 19th International Conference on Communication Technology (ICCT)
Publication date: 2019-10-01
Citations: 3
Abstract
Caching on edge nodes can effectively reduce the burden on Internet of Vehicles (IoV) networks. However, the inherent limitations of IoV networks, such as the restricted storage capacity of cache nodes and the high mobility of vehicles, may degrade quality of service. Accurate mobility prediction could enable seamless switching between edge servers, reduce pre-fetch redundancy, and improve data transmission efficiency. This paper investigates how to pre-cache packets at edge nodes to accelerate services and improve the user experience. We consider the trade-off between modeling accuracy and computational complexity, and design a Markov Deep Q-Learning (MDQL) model to formulate the caching strategy. A k-order Markov model is first used to predict the mobility of vehicles, and the prediction results are used as the input to deep reinforcement learning (DRL) for training. The MDQL model reduces the size of the action space and the computational complexity of DRL while balancing the cache hit rate against the cache replacement rate. Experimental results demonstrate the effectiveness of the proposed method.
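The first stage of the pipeline described above — predicting a vehicle's next location from its last k visited zones — can be sketched with a simple frequency-based k-order Markov model. This is a minimal illustrative sketch, not the paper's implementation; the class name, zone labels, and API are assumptions.

```python
from collections import defaultdict, Counter

class KOrderMarkovPredictor:
    """Predict a vehicle's next zone from its last k visited zones.

    Hypothetical sketch of the k-order Markov step: transition counts
    from each length-k history to the zone that followed it.
    """

    def __init__(self, k=2):
        self.k = k
        # Maps a length-k history tuple to a counter of next-zone occurrences.
        self.counts = defaultdict(Counter)

    def train(self, trajectory):
        # Slide a window of length k over the trajectory and count
        # which zone followed each history.
        for i in range(len(trajectory) - self.k):
            history = tuple(trajectory[i:i + self.k])
            nxt = trajectory[i + self.k]
            self.counts[history][nxt] += 1

    def predict(self, recent):
        # Return the most frequently observed next zone for the last
        # k zones, or None if this history was never seen in training.
        history = tuple(recent[-self.k:])
        if history not in self.counts:
            return None
        return self.counts[history].most_common(1)[0][0]

predictor = KOrderMarkovPredictor(k=2)
predictor.train(["A", "B", "C", "A", "B", "C", "A", "B", "D"])
print(predictor.predict(["A", "B"]))  # "C" observed twice vs "D" once
```

In the MDQL framework, such a prediction would narrow the DRL agent's action space: only content relevant to the predicted next edge node needs to be considered for pre-caching, which is how the paper trades a small amount of modeling accuracy for lower computational complexity.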