A dynamic spectrum access algorithm based on deep reinforcement learning with novel multi-vehicle reward functions in cognitive vehicular networks

Impact Factor: 1.7 · JCR Q3, Telecommunications · CAS Region 4, Computer Science
Lingling Chen, Ziwei Wang, Xiaohui Zhao, Xuan Shen, Wei He
DOI: 10.1007/s11235-024-01188-5 · Published: 2024-06-28 · Journal Article · Telecommunication Systems
Citations: 0

Abstract


Transportation is undergoing a revolution, and vehicles' demand for communication keeps growing; improving the success rate of vehicle spectrum access has therefore become a major problem to be solved. Previous research on dynamic spectrum access in cognitive vehicular networks (CVNs) considered only the case of a single vehicle accessing a channel, so spectrum resources could not be fully utilized. To fully utilize spectrum resources, a model for spectrum sharing among multiple secondary vehicles (SVs) and a primary vehicle (PV) is proposed. The model covers scenarios in which multiple SVs share spectrum to maximize the vehicles' average quality of service (QoS), subject to the condition that the total interference generated by vehicles accessing the same channel remains below an interference threshold. This paper proposes a deep Q-network algorithm with an improved reward function (IDQN) to maximize the average QoS of PVs and SVs and to improve spectrum utilization. The algorithm is designed with different reward functions according to the QoS of PVs and SVs in different situations. Finally, the proposed algorithm is compared with the deep Q-network (DQN) and Q-learning algorithms on a Python simulation platform. The average access success rate of SVs under the proposed IDQN algorithm reaches 98%, an 18% improvement over the Q-learning algorithm, and its convergence is 62.5% faster than the DQN algorithm. Meanwhile, the average QoS of PVs and the average QoS of SVs under IDQN reach 2.4, improvements of 50% and 33% over the DQN algorithm, and of 60% and 140% over the Q-learning algorithm.
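The abstract does not spell out the reward design, but the core idea it describes can be sketched: reward the average QoS of the PV and SVs when the aggregate interference on a shared channel stays below the threshold, and penalize access attempts that violate it. The sketch below is a minimal illustration under assumed details, not the paper's actual reward function: the QoS proxy (log-throughput of SINR), the PV/SV weights, and the penalty value are all hypothetical.

```python
import numpy as np

def multi_vehicle_reward(sv_sinrs, pv_sinr, total_interference,
                         interference_threshold=1.0):
    """Illustrative per-step reward for SVs sharing a channel with a PV.

    Returns a fixed penalty when the aggregate interference on the channel
    exceeds the threshold (the PV-protection constraint from the model);
    otherwise returns a weighted sum of PV QoS and average SV QoS, with
    QoS approximated here by log2(1 + SINR) throughput.
    """
    if total_interference > interference_threshold:
        # Access attempt violates the interference constraint.
        return -1.0
    pv_qos = np.log2(1.0 + pv_sinr)
    sv_qos = float(np.mean(np.log2(1.0 + np.asarray(sv_sinrs))))
    # Weight the PV more heavily: primary users must be protected first
    # (weights 0.6/0.4 are assumptions for illustration).
    return 0.6 * pv_qos + 0.4 * sv_qos
```

In an IDQN-style agent this function would be evaluated after each channel-access action, with different reward branches (here just the penalty vs. the QoS term) corresponding to the different situations the abstract mentions.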

Source journal: Telecommunication Systems (Engineering & Technology — Telecommunications)
CiteScore: 5.40 · Self-citation rate: 8.00% · Articles per year: 105 · Review time: 6.0 months
About the journal: Telecommunication Systems is a journal covering all aspects of modeling, analysis, design and management of telecommunication systems. The journal publishes high-quality articles dealing with the use of analytic and quantitative tools for the modeling, analysis, design and management of telecommunication systems, covering: Performance Evaluation of Wide Area and Local Networks; Network Interconnection; Wired, Wireless, Ad Hoc and Mobile Networks; Impact of New Services (economic and organizational impact); Fiber Optics and Photonic Switching; DSL, ADSL, Cable TV and Their Impact; Design and Analysis Issues in Metropolitan Area Networks; Networking Protocols; Dynamics and Capacity Expansion of Telecommunication Systems; Multimedia-Based Systems, Their Design, Configuration and Impact; Configuration of Distributed Systems; Pricing for Networking and Telecommunication Services; Performance Analysis of Local Area Networks; Distributed Group Decision Support Systems; Configuring Telecommunication Systems with Reliability and Availability; Cost-Benefit Analysis and Economic Impact of Telecommunication Systems; Standardization and Regulatory Issues; Security, Privacy and Encryption in Telecommunication Systems; Cellular, Mobile and Satellite-Based Systems.