An Efficient Deep Reinforcement Learning Based Distributed Channel Multiplexing Framework for V2X Communication Networks

R. Hu, Xinguo Wang, Yuyuan Su, Bin Yang
{"title":"基于深度强化学习的V2X通信网络分布式信道复用框架","authors":"R. Hu, Xinguo Wang, Yuyuan Su, Bin Yang","doi":"10.1109/ICCECE51280.2021.9342305","DOIUrl":null,"url":null,"abstract":"It is crucial to multiplex channel resources efficiently in wireless networks due to the link interference and wireless spectrum scarcity. In this paper, we study the allocation problem of channel resources in Vehicle-to-Everything communication networks. We model this problem as a decentralized Markov Decision Process, where each V2V Agent independently decides its channel and power level based on the local environmental observations and global network reward. Then, a multi-agent distributed channel resource multiplexing framework based on Deep Reinforcement Learning is proposed to derive the best joint resources allocation solution. Furthermore, Prioritized DDQN algorithm is used to provide a more accurate estimation target for the action evaluation and can effectively reduce Q-Values’ overestimation. The extensive experimental results show that the proposed framework can achieve better performances than the existing works in terms of both the capacity sum of V2I channels and the package delivery success ratios of V2V links.","PeriodicalId":229425,"journal":{"name":"2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"An Efficient Deep Reinforcement Learning Based Distributed Channel Multiplexing Framework for V2X Communication Networks\",\"authors\":\"R. Hu, Xinguo Wang, Yuyuan Su, Bin Yang\",\"doi\":\"10.1109/ICCECE51280.2021.9342305\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"It is crucial to multiplex channel resources efficiently in wireless networks due to the link interference and wireless spectrum scarcity. In this paper, we study the allocation problem of channel resources in Vehicle-to-Everything communication networks. We model this problem as a decentralized Markov Decision Process, where each V2V Agent independently decides its channel and power level based on the local environmental observations and global network reward. Then, a multi-agent distributed channel resource multiplexing framework based on Deep Reinforcement Learning is proposed to derive the best joint resources allocation solution. Furthermore, Prioritized DDQN algorithm is used to provide a more accurate estimation target for the action evaluation and can effectively reduce Q-Values’ overestimation. 
The extensive experimental results show that the proposed framework can achieve better performances than the existing works in terms of both the capacity sum of V2I channels and the package delivery success ratios of V2V links.\",\"PeriodicalId\":229425,\"journal\":{\"name\":\"2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCECE51280.2021.9342305\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCECE51280.2021.9342305","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

It is crucial to multiplex channel resources efficiently in wireless networks due to link interference and wireless spectrum scarcity. In this paper, we study the allocation of channel resources in Vehicle-to-Everything communication networks. We model this problem as a decentralized Markov Decision Process, where each V2V agent independently decides its channel and power level based on local environmental observations and a global network reward. Then, a multi-agent distributed channel resource multiplexing framework based on Deep Reinforcement Learning is proposed to derive the best joint resource allocation solution. Furthermore, the Prioritized DDQN algorithm is used to provide a more accurate estimation target for action evaluation, effectively reducing the overestimation of Q-values. Extensive experimental results show that the proposed framework achieves better performance than existing works in terms of both the sum capacity of V2I channels and the packet delivery success ratio of V2V links.
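
The core mechanism the abstract describes is that each V2V agent independently picks a channel and a transmit power level from its local observations, trained with Prioritized DDQN so that action selection and action evaluation are decoupled and Q-value overestimation is reduced. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the observation size, the numbers of channels and power levels, the network architecture, and all names (QNet, select_action, ddqn_update) are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): one V2V agent selecting a joint
# (channel, power level) action and updating with Double DQN targets weighted by
# importance-sampling weights from a prioritized replay buffer.
import random
import torch
import torch.nn as nn

N_CHANNELS, N_POWER_LEVELS = 4, 3          # assumed action space: channels x power levels
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS    # each joint action is a single discrete index
OBS_DIM = 16                               # assumed size of the local observation vector

class QNet(nn.Module):
    """Maps a local observation to one Q-value per joint (channel, power) action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, obs):
        return self.net(obs)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma = 0.99

def select_action(obs, eps=0.1):
    """Epsilon-greedy choice of the joint channel/power action from local observations."""
    if random.random() < eps:
        a = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            a = online(torch.as_tensor(obs, dtype=torch.float32)).argmax().item()
    return a // N_POWER_LEVELS, a % N_POWER_LEVELS   # decode to (channel, power level)

def ddqn_update(batch, is_weights):
    """Double DQN update: the online net selects the next action, the target net
    evaluates it; is_weights are importance-sampling weights from prioritized replay."""
    obs, act, rew, next_obs, done = batch
    q = online(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = online(next_obs).argmax(dim=1, keepdim=True)   # action selection (online net)
        next_q = target(next_obs).gather(1, next_a).squeeze(1)  # action evaluation (target net)
        y = rew + gamma * (1 - done) * next_q
    td_error = y - q
    loss = (is_weights * td_error.pow(2)).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return td_error.abs().detach()   # new priorities for the replay buffer
```

The returned absolute TD errors would be written back to the prioritized replay buffer as updated sampling priorities; the buffer itself, the target-network synchronization schedule, and the multi-agent training loop over all V2V agents are omitted here.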