A Reinforcement Learning-based Radio Resource Management Algorithm for D2D-based V2V Communication
S. Feki, A. Belghith, F. Zarai
2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC)
Published: 2019-06-24
DOI: 10.1109/IWCMC.2019.8766509
Citations: 10
Abstract
Device-to-Device (D2D) communication is an emerging technology that offers several advantages for LTE-A networks, such as higher spectral efficiency and wireless peer-to-peer services. It is considered a promising technology for many fields, including public safety, network traffic offloading, and social applications and services. However, integrating D2D communications into cellular networks creates two main challenges. First, the interference caused by D2D links to cellular links can significantly degrade the performance of cellular devices. Second, the minimum QoS requirements of D2D communications must be guaranteed. Synchronization between devices therefore becomes a necessity, while Radio Resource Management (RRM) remains a challenge. In this paper, we study the RRM problem for Vehicle-to-Vehicle (V2V) communication. A dynamic neural Q-learning-based resource allocation and resource sharing algorithm is proposed for D2D-based V2V communication in LTE-A cellular networks. Simulation results show that the proposed algorithm yields the best-performing allocations, improving overall network performance.
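The abstract does not give the algorithm's details, but the core idea of Q-learning-based resource sharing can be illustrated with a deliberately simplified sketch: each D2D pair learns, by trial and error, which resource block to use so as to avoid interfering with the others. The toy reward below (collision penalty as a stand-in for interference), the tabular Q-table (the paper uses a neural approximation), and all parameter values are illustrative assumptions, not the authors' method.

```python
import random

# Toy setup (assumed, not from the paper): each of N_D2D pairs picks one of
# N_RB resource blocks; a collision with another pair models interference.
N_RB = 4            # number of resource blocks (actions)
N_D2D = 3           # number of D2D pairs (learning agents)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Tabular Q-values, one row per pair (the state is static in this toy).
Q = [[0.0] * N_RB for _ in range(N_D2D)]

def choose_action(q_row, eps):
    """Epsilon-greedy selection over resource blocks."""
    if random.random() < eps:
        return random.randrange(N_RB)
    return max(range(N_RB), key=lambda a: q_row[a])

def reward(actions, pair):
    """+1 if the pair holds its block alone, -1 on a collision (interference)."""
    mine = actions[pair]
    clash = any(actions[j] == mine for j in range(N_D2D) if j != pair)
    return -1.0 if clash else 1.0

random.seed(0)
for episode in range(500):
    actions = [choose_action(Q[i], EPS) for i in range(N_D2D)]
    for i in range(N_D2D):
        r = reward(actions, i)
        # Stateless Q-update: the next-state value collapses to max over the row.
        Q[i][actions[i]] += ALPHA * (r + GAMMA * max(Q[i]) - Q[i][actions[i]])

# Greedy allocation after learning: one resource block per D2D pair.
alloc = [max(range(N_RB), key=lambda a: Q[i][a]) for i in range(N_D2D)]
print(alloc)
```

After enough episodes the pairs tend to settle on distinct blocks, since collisions are penalized; the paper's dynamic neural variant would replace the table with a network and a richer state (e.g. channel conditions), which this sketch omits.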