Amir Masoud Rahmani, Amir Haider, Monji Mohamed Zaidi, Abed Alanazi, Shtwai Alsubai, Abdullah Alqahtani, Mohammad Sadegh Yousefpoor, Efat Yousefpoor, Mehdi Hosseinzadeh
{"title":"QRCF:一种新的基于q学习的路由方法,在飞行自组织网络中使用智能圆柱滤波系统","authors":"Amir Masoud Rahmani , Amir Haider , Monji Mohamed Zaidi , Abed Alanazi , Shtwai Alsubai , Abdullah Alqahtani , Mohammad Sadegh Yousefpoor , Efat Yousefpoor , Mehdi Hosseinzadeh","doi":"10.1016/j.vehcom.2025.100905","DOIUrl":null,"url":null,"abstract":"<div><div>To ensure reliable data transmission in flying ad hoc networks (FANETs), efficient routing protocols are necessary to establish communication paths in FANETs. Recently, reinforcement learning (RL), particularly Q-learning, has become a promising approach for overcoming challenges faced by traditional routing protocols due to its capacity for autonomous adaptation and self-learning. This study presents a Q-learning-based routing strategy, enhanced by an innovative cylindrical filtering technique, named QRCF in FANETs. In QRCF, the dissemination interval of hello packets is adaptively adjusted based on the connection status of nearby UAVs. Then, this routing process leverages Q-learning to discover reliable and stable routes, using a state set refined by the cylindrical filtering technique to accelerate the search for the optimal path in the network. Afterward, the reward value is computed using metrics such as relative speed, connection time, residual energy, and movement path. Finally, QRCF is deployed in the network simulator 2 (NS2), and its performance is evaluated against three routing schemes, QRF, QFAN, and QTAR. These evaluations are presented based on the number of UAVs and their speed. In general, when changing the number of nodes, QRCF improves energy usage (about 5.01%), data delivery ratio (approximately 1.20%), delay (17.71%), and network longevity (about 3.21%). However, it has a higher overhead (approximately 10.91%) than QRF. Moreover, when changing the speed of UAVs in the network, QRCF improves energy usage (about 4.94%), data delivery ratio (approximately 2.36%), delay (about 17.5%), and network lifetime (approximately 8.75%). 
However, it increases routing overhead (approximately 15.47%) in comparison with QRF.</div></div>","PeriodicalId":54346,"journal":{"name":"Vehicular Communications","volume":"53 ","pages":"Article 100905"},"PeriodicalIF":5.8000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"QRCF: A new Q-learning-based routing approach using a smart cylindrical filtering system in flying ad hoc networks\",\"authors\":\"Amir Masoud Rahmani , Amir Haider , Monji Mohamed Zaidi , Abed Alanazi , Shtwai Alsubai , Abdullah Alqahtani , Mohammad Sadegh Yousefpoor , Efat Yousefpoor , Mehdi Hosseinzadeh\",\"doi\":\"10.1016/j.vehcom.2025.100905\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To ensure reliable data transmission in flying ad hoc networks (FANETs), efficient routing protocols are necessary to establish communication paths in FANETs. Recently, reinforcement learning (RL), particularly Q-learning, has become a promising approach for overcoming challenges faced by traditional routing protocols due to its capacity for autonomous adaptation and self-learning. This study presents a Q-learning-based routing strategy, enhanced by an innovative cylindrical filtering technique, named QRCF in FANETs. In QRCF, the dissemination interval of hello packets is adaptively adjusted based on the connection status of nearby UAVs. Then, this routing process leverages Q-learning to discover reliable and stable routes, using a state set refined by the cylindrical filtering technique to accelerate the search for the optimal path in the network. Afterward, the reward value is computed using metrics such as relative speed, connection time, residual energy, and movement path. Finally, QRCF is deployed in the network simulator 2 (NS2), and its performance is evaluated against three routing schemes, QRF, QFAN, and QTAR. 
These evaluations are presented based on the number of UAVs and their speed. In general, when changing the number of nodes, QRCF improves energy usage (about 5.01%), data delivery ratio (approximately 1.20%), delay (17.71%), and network longevity (about 3.21%). However, it has a higher overhead (approximately 10.91%) than QRF. Moreover, when changing the speed of UAVs in the network, QRCF improves energy usage (about 4.94%), data delivery ratio (approximately 2.36%), delay (about 17.5%), and network lifetime (approximately 8.75%). However, it increases routing overhead (approximately 15.47%) in comparison with QRF.</div></div>\",\"PeriodicalId\":54346,\"journal\":{\"name\":\"Vehicular Communications\",\"volume\":\"53 \",\"pages\":\"Article 100905\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2025-04-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Vehicular Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2214209625000324\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vehicular Communications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214209625000324","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
QRCF: A new Q-learning-based routing approach using a smart cylindrical filtering system in flying ad hoc networks
To ensure reliable data transmission in flying ad hoc networks (FANETs), efficient routing protocols are needed to establish communication paths. Recently, reinforcement learning (RL), particularly Q-learning, has become a promising approach for overcoming the challenges faced by traditional routing protocols, owing to its capacity for autonomous adaptation and self-learning. This study presents QRCF, a Q-learning-based routing strategy for FANETs enhanced by an innovative cylindrical filtering technique. In QRCF, the dissemination interval of hello packets is adaptively adjusted based on the connection status of nearby UAVs. The routing process then leverages Q-learning to discover reliable and stable routes, using a state set refined by the cylindrical filtering technique to accelerate the search for the optimal path in the network. Afterward, the reward value is computed from metrics such as relative speed, connection time, residual energy, and movement path. Finally, QRCF is deployed in Network Simulator 2 (NS2), and its performance is evaluated against three routing schemes: QRF, QFAN, and QTAR. These evaluations are presented as functions of the number of UAVs and their speed. In general, when varying the number of nodes, QRCF improves energy usage (about 5.01%), data delivery ratio (approximately 1.20%), delay (17.71%), and network longevity (about 3.21%); however, it incurs a higher overhead (approximately 10.91%) than QRF. Moreover, when varying the speed of UAVs in the network, QRCF improves energy usage (about 4.94%), data delivery ratio (approximately 2.36%), delay (about 17.5%), and network lifetime (approximately 8.75%), while increasing routing overhead (approximately 15.47%) compared with QRF.
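The abstract outlines three mechanisms: filtering the Q-learning state set to neighbors lying inside a cylinder toward the destination, computing a reward from relative speed, connection time, residual energy, and movement path, and applying the standard Q-learning update. The sketch below is an illustrative reconstruction under stated assumptions — the paper's exact geometry, normalizations, and weights are not given in the abstract, so all function names, weights, and metric scalings here are placeholders, not the authors' formulas.

```python
import math

def cylindrical_filter(current, dest, neighbors, radius):
    """Keep neighbors whose perpendicular distance to the current->destination
    axis is within `radius` and that lie forward of the current UAV.
    This mimics the 'cylindrical filtering' of the state set (assumed geometry)."""
    axis = [d - c for c, d in zip(current, dest)]
    norm = math.sqrt(sum(a * a for a in axis)) or 1.0
    axis = [a / norm for a in axis]            # unit vector toward destination
    kept = []
    for n in neighbors:
        v = [p - c for c, p in zip(current, n["pos"])]
        proj = sum(a * b for a, b in zip(axis, v))      # distance along axis
        perp_sq = sum(b * b for b in v) - proj * proj   # squared radial distance
        if proj >= 0 and perp_sq <= radius ** 2:
            kept.append(n)
    return kept

def reward(n, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted sum of the four metrics named in the abstract, each assumed
    pre-normalized to [0, 1]. The weights are placeholders."""
    return (w[0] * (1.0 - n["rel_speed"]) + w[1] * n["conn_time"]
            + w[2] * n["energy"] + w[3] * n["path_score"])

def update_q(Q, state, action, r, next_best, alpha=0.5, gamma=0.9):
    """Standard Q-learning update: Q(s,a) <- (1-a)Q(s,a) + a(r + g*max Q(s',·))."""
    Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                          + alpha * (r + gamma * next_best))
    return Q[(state, action)]
```

A current UAV would filter its neighbor table with `cylindrical_filter`, score each surviving candidate with `reward`, and reinforce the chosen next hop via `update_q`; shrinking the cylinder radius trades route diversity for a faster search over fewer states.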
Journal description:
Vehicular communications is a growing area covering communications between vehicles and with roadside communication infrastructure. Advances in wireless communications are making it possible to share information through real-time communication between vehicles and infrastructure. This has led to applications that increase vehicle safety and connect passengers to the Internet. Standardization efforts on vehicular communication are also underway to make vehicular transportation safer, greener and easier.
The aim of the journal is to publish high-quality peer-reviewed papers in the area of vehicular communications. The scope encompasses all types of communications involving vehicles, including vehicle-to-vehicle and vehicle-to-infrastructure. The scope includes (but is not limited to) the following topics related to vehicular communications:
Vehicle to vehicle and vehicle to infrastructure communications
Channel modelling, modulation and coding
Congestion control and scalability issues
Protocol design, testing and verification
Routing in vehicular networks
Security issues and countermeasures
Deployment and field testing
Reducing energy consumption and enhancing safety of vehicles
Wireless in-car networks
Data collection and dissemination methods
Mobility and handover issues
Safety and driver assistance applications
UAV
Underwater communications
Autonomous cooperative driving
Social networks
Internet of vehicles
Standardization of protocols.