Yaping Cui, Xinyun Huang, P. He, D. Wu, Ruyang Wang
2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), April 2021. DOI: 10.1109/VTC2021-Spring51267.2021.9448852
A Two-Timescale Resource Allocation Scheme in Vehicular Network Slicing
Network slicing can support diverse use cases with heterogeneous requirements and is considered a key enabler of future networks. However, due to the dynamic traffic demands and high mobility in vehicular networks, efficiently performing RAN slicing to provide stable quality of service (QoS) for connected vehicles remains a challenge. To meet the diversified service requests of vehicles in such a dynamic environment, in this paper we propose a two-timescale radio resource allocation scheme, named LSTM-DDPG, to provide stable service for vehicles. Specifically, to capture the long-term dynamics of vehicle service requests, we use a long short-term memory (LSTM) network to track them, so that dedicated resource allocation is executed on a long timescale using historical data. On the other hand, to cope with short-term channel changes caused by high-speed movement, a deep reinforcement learning (DRL) algorithm, namely deep deterministic policy gradient (DDPG), is leveraged to adjust the allocated resources. Simulation results demonstrate the effectiveness of the proposed LSTM-DDPG: the cumulative probability that a slice delivers stable performance to the served vehicle within the resource scheduling interval exceeds 90%. Compared with conventional deep Q-networks (DQN), the average cumulative probability increases by 27.8%.
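The two-timescale structure described above can be illustrated with a minimal sketch. This is not the authors' implementation: the LSTM forecaster and the DDPG policy are replaced here by simple hypothetical placeholder functions (`predict_demand`, `ddpg_adjust`), and all names, shapes, and parameters are assumptions chosen only to make the long-timescale/short-timescale control loop concrete.

```python
import numpy as np

def predict_demand(history):
    """Long-timescale step: stand-in for the LSTM forecaster.
    Returns a per-slice demand estimate from historical service requests
    (a naive average here, where the paper would use an LSTM)."""
    return np.mean(history, axis=0)

def ddpg_adjust(allocation, channel_gain):
    """Short-timescale step: stand-in for the DDPG policy, which in the
    paper outputs a continuous adjustment action from the channel state.
    Here we illustratively shift resources toward slices with poorer
    channels and renormalize to the total resource budget."""
    adj = allocation * (1.0 / np.clip(channel_gain, 0.1, None))
    return adj / adj.sum()

def two_timescale_allocation(history, channel_trace, slots_per_interval=10):
    # Long timescale: dedicate resources per slice from predicted demand,
    # computed once per resource scheduling interval from historical data.
    demand = predict_demand(history)
    base = demand / demand.sum()
    # Short timescale: per-slot adjustment within the interval, reacting
    # to channel variations caused by vehicle movement.
    schedule = [ddpg_adjust(base, channel_trace[t])
                for t in range(slots_per_interval)]
    return base, schedule

# Illustrative data: 3 slices, 20 historical request samples, 10 slots.
rng = np.random.default_rng(0)
history = rng.uniform(1, 5, size=(20, 3))       # past per-slice requests
channels = rng.uniform(0.2, 1.0, size=(10, 3))  # per-slot channel gains
base, schedule = two_timescale_allocation(history, channels)
print(base, schedule[0])
```

The point of the split is that the expensive, history-driven decision (the dedicated allocation) changes slowly, while the cheap per-slot correction absorbs fast channel fluctuations between scheduling intervals.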