Ying He, F. Yu, Nan Zhao, Hongxi Yin, A. Boukerche
{"title":"基于深度强化学习(DRL)的软件定义虚拟化车辆自组织网络资源管理","authors":"Ying He, F. Yu, Nan Zhao, Hongxi Yin, A. Boukerche","doi":"10.1145/3132340.3132355","DOIUrl":null,"url":null,"abstract":"Vehicular ad hoc networks (VANETs) have attracted great interests from both industry and academia. The developments of VANETs are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.","PeriodicalId":113404,"journal":{"name":"Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":"{\"title\":\"Deep Reinforcement Learning (DRL)-based Resource Management in Software-Defined and Virtualized Vehicular Ad Hoc Networks\",\"authors\":\"Ying He, F. Yu, Nan Zhao, Hongxi Yin, A. 
Boukerche\",\"doi\":\"10.1145/3132340.3132355\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Vehicular ad hoc networks (VANETs) have attracted great interests from both industry and academia. The developments of VANETs are heavily influenced by information and communications technologies, which have fueled a plethora of innovations in various areas, including networking, caching and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on vehicular networks. In this paper, we propose an integrated framework that can enable dynamic orchestration of networking, caching and computing resources to improve the performance of next generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem, where the gains of not only networking but also caching and computing are taken into consideration in the proposed framework. The complexity of the system is very high when we jointly consider these three technologies. Therefore, we propose a novel deep reinforcement learning approach in this paper. 
Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.\",\"PeriodicalId\":113404,\"journal\":{\"name\":\"Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"29\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3132340.3132355\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 6th ACM Symposium on Development and Analysis of Intelligent Vehicular Networks and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3132340.3132355","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Reinforcement Learning (DRL)-based Resource Management in Software-Defined and Virtualized Vehicular Ad Hoc Networks
Vehicular ad hoc networks (VANETs) have attracted great interest from both industry and academia. The development of VANETs is heavily influenced by information and communications technologies, which have fueled a wealth of innovation in areas including networking, caching, and computing. Nevertheless, these important enabling technologies have traditionally been studied separately in existing work on vehicular networks. In this paper, we propose an integrated framework that enables dynamic orchestration of networking, caching, and computing resources to improve the performance of next-generation vehicular networks. We formulate the resource allocation strategy in this framework as a joint optimization problem that accounts for the gains of networking, caching, and computing alike. Because the system becomes highly complex when these three technologies are considered jointly, we propose a novel deep reinforcement learning approach. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.
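To illustrate the kind of resource-allocation loop the abstract describes, here is a minimal tabular Q-learning sketch: the agent observes a toy state (channel quality, cache hit, free compute) and chooses which resource to allocate, with the reward combining networking, caching, and computing gains. The state encoding, action set, and reward weights are invented for illustration; the paper replaces the Q-table with a deep neural network and uses a far richer system model.

```python
import random

# Toy state: (channel_good, cache_hit, cpu_free), each 0 or 1.
# Action: which resource to allocate to the current vehicle request.
# All names and reward weights below are hypothetical, not from the paper.
STATES = [(c, h, f) for c in (0, 1) for h in (0, 1) for f in (0, 1)]
ACTIONS = ["network", "cache", "compute"]

def reward(state, action):
    channel_good, cache_hit, cpu_free = state
    if action == "network":
        return 1.0 if channel_good else 0.2   # good channel -> throughput gain
    if action == "cache":
        return 1.5 if cache_hit else 0.0      # cache hit saves backhaul traffic
    return 1.2 if cpu_free else 0.1           # offloading pays off only with free CPU

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):                    # short episode of requests
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            r = reward(s, a)
            s2 = rng.choice(STATES)            # next request arrives in a random state
            best_next = max(q[(s2, x)] for x in ACTIONS)
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

After training, the greedy policy allocates the cache when a cache hit is available, offloads to compute when the CPU is free, and falls back on the radio link otherwise, showing how a single learned value function can arbitrate among the three resource types.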