Reinforcement Learning Based Efficient Power Control and Spectrum Utilization for D2D Communication in 5G Network

Q1 Mathematics
Chellarao Chowdary Mallipudi, S. Chandra, Prateek Prakash, Rajeev Arya, Akhtar Husain, S. Qamar
Journal: International Journal of Computer Network and Information Security
DOI: 10.5815/ijcnis.2023.04.02
Published: 2023-08-08 (Journal Article)
Citations: 1

Abstract

Billions of interconnected devices, enabled by the Internet of Things (IoT), are used in applications such as wearable devices, e-healthcare, agriculture, and transportation. In underlaid Device-to-Device (D2D) communication, devices establish a direct link and share information over the spectrum of cellular users, improving spectral efficiency at low power consumption. However, because D2D users reuse the cellular spectrum, severe interference arises between cellular and D2D links, which can degrade network performance. We therefore propose a Q-learning based low-power selection scheme that uses multi-agent reinforcement learning to mitigate this interference and thereby increase the capacity of the D2D network. To maximize capacity, the reward function is reformulated with the help of a stochastic policy environment. Using this stochastic approach, we derive optimal low-power-consumption techniques that preserve the quality-of-service (QoS) requirements of both cellular and D2D users for D2D communication in 5G networks and increase resource utilization. Numerical results confirm that the proposed scheme improves spectral efficiency and sum rate over the baseline Q-learning approach by 14% and 12.65%, respectively.
Source journal metrics: CiteScore 4.10 · Self-citation rate 0.00% · Annual publications: 33