A Two-Timescale Resource Allocation Scheme in Vehicular Network Slicing

Yaping Cui, Xinyun Huang, P. He, D. Wu, Ruyang Wang
Published in: 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring)
Publication date: April 2021
DOI: 10.1109/VTC2021-Spring51267.2021.9448852
Citations: 3

Abstract

Network slicing can support diverse use cases with heterogeneous requirements and is considered one of the key enablers of future networks. However, owing to dynamic traffic demands and vehicle mobility in vehicular networks, performing RAN slicing efficiently to provide stable quality of service (QoS) for connected vehicles remains a challenge. To meet the diversified service requests of vehicles in such a dynamic environment, this paper proposes a two-timescale radio resource allocation scheme, LSTM-DDPG, that provides stable service for vehicles. Specifically, to track the long-term dynamics of vehicle service requests, long short-term memory (LSTM) is used to perform dedicated resource allocation on a long timescale from historical data. To counter the rapid channel variations caused by high-speed movement, a deep reinforcement learning (DRL) algorithm, deep deterministic policy gradient (DDPG), adjusts the allocated resources on a short timescale. Simulation results demonstrate the effectiveness of the proposed LSTM-DDPG: the cumulative probability that a slice delivers stable performance to a served vehicle within the resource scheduling interval exceeds 90%, and the average cumulative probability is 27.8% higher than that of the conventional deep Q-network (DQN).
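The control structure described above can be sketched as a nested loop: a long-timescale step that dedicates resources from historical demand, and a short-timescale step that reacts to fast channel variation within each scheduling window. The following is a minimal illustrative sketch of that loop only; simple placeholders (a moving-average forecast and a channel-weighted rebalance) stand in for the paper's actual LSTM forecaster and DDPG policy, and all names, intervals, and numbers are hypothetical.

```python
import random

TOTAL_BANDWIDTH = 100.0  # resource budget shared by the slices (hypothetical units)
LONG_INTERVAL = 10       # short-timescale steps per long-timescale window (hypothetical)

def long_timescale_forecast(history):
    """Stand-in for the paper's LSTM forecaster: predict each slice's
    demand share from the mean of its recent demand history."""
    means = [sum(h) / len(h) for h in history]
    total = sum(means)
    return [m / total for m in means]

def short_timescale_adjust(shares, channel_gains):
    """Stand-in for the paper's DDPG policy: shift resources toward
    slices whose observed channel quality dropped, then renormalize."""
    # Poorer channels (lower gain) need more bandwidth for the same QoS.
    weighted = [s / g for s, g in zip(shares, channel_gains)]
    total = sum(weighted)
    return [w / total for w in weighted]

def run(num_slices=2, windows=3, seed=0):
    rng = random.Random(seed)
    # Synthetic per-slice demand history driving the long-timescale step.
    history = [[rng.uniform(1.0, 5.0) for _ in range(5)] for _ in range(num_slices)]
    shares = [1.0 / num_slices] * num_slices
    for _ in range(windows):
        # Long timescale: dedicated allocation from historical demand.
        shares = long_timescale_forecast(history)
        for _ in range(LONG_INTERVAL):
            # Short timescale: react to fast channel changes from mobility.
            gains = [rng.uniform(0.8, 1.2) for _ in range(num_slices)]
            shares = short_timescale_adjust(shares, gains)
        # Append newly observed demand so the next window uses fresh history.
        for h in history:
            h.append(rng.uniform(1.0, 5.0))
    return [s * TOTAL_BANDWIDTH for s in shares]
```

The point of the sketch is the separation of concerns: the expensive, data-driven reallocation runs once per window, while the lightweight per-step adjustment absorbs channel variation between reallocations.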