Mehmet Ariman;Mertkan Akkoç;Talip Tolga Sari;Muhammed Raşit Erol;Gökhan Seçinti;Berk Canberk
{"title":"基于高效节能rl的灾区空中网络部署试验台","authors":"Mehmet Ariman;Mertkan Akkoç;Talip Tolga Sari;Muhammed Raşit Erol;Gökhan Seçinti;Berk Canberk","doi":"10.23919/JCN.2022.000057","DOIUrl":null,"url":null,"abstract":"Rapid deployment of wireless devices with 5G and beyond enabled a connected world. However, an immediate demand increase right after a disaster paralyzes network infrastructure temporarily. The continuous flow of information is crucial during disaster times to coordinate rescue operations and identify the survivors. Communication infrastructures built for users of disaster areas should satisfy rapid deployment, increased coverage, and availability. Unmanned air vehicles (UAV) provide a potential solution for rapid deployment as they are not affected by traffic jams and physical road damage during a disaster. In addition, ad-hoc WiFi communication allows the generation of broadcast domains within a clear channel which eases one-to-many communications. Moreover, using reinforcement learning (RL) helps reduce the computational cost and increases the accuracy of the NP-hard problem of aerial network deployment. To this end, a novel flying WiFi ad-hoc network management model is proposed in this paper. The model utilizes deep-Q-learning to maintain quality-of-service (QoS), increase user equipment (UE) coverage, and optimize power efficiency. Furthermore, a testbed is deployed on Istanbul Technical University (ITU) campus to train the developed model. Training results of the model using testbed accumulates over 90% packet delivery ratio as QoS, over 97% coverage for the users in flow tables, and 0.28 KJ/Bit average power consumption.","PeriodicalId":54864,"journal":{"name":"Journal of Communications and Networks","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2023-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/5449605/10077469/10012517.pdf","citationCount":"2","resultStr":"{\"title\":\"Energy-efficient RL-based aerial network deployment testbed for disaster areas\",\"authors\":\"Mehmet Ariman;Mertkan Akkoç;Talip Tolga Sari;Muhammed Raşit Erol;Gökhan Seçinti;Berk Canberk\",\"doi\":\"10.23919/JCN.2022.000057\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Rapid deployment of wireless devices with 5G and beyond enabled a connected world. However, an immediate demand increase right after a disaster paralyzes network infrastructure temporarily. The continuous flow of information is crucial during disaster times to coordinate rescue operations and identify the survivors. Communication infrastructures built for users of disaster areas should satisfy rapid deployment, increased coverage, and availability. Unmanned air vehicles (UAV) provide a potential solution for rapid deployment as they are not affected by traffic jams and physical road damage during a disaster. In addition, ad-hoc WiFi communication allows the generation of broadcast domains within a clear channel which eases one-to-many communications. Moreover, using reinforcement learning (RL) helps reduce the computational cost and increases the accuracy of the NP-hard problem of aerial network deployment. To this end, a novel flying WiFi ad-hoc network management model is proposed in this paper. The model utilizes deep-Q-learning to maintain quality-of-service (QoS), increase user equipment (UE) coverage, and optimize power efficiency. 
Furthermore, a testbed is deployed on Istanbul Technical University (ITU) campus to train the developed model. Training results of the model using testbed accumulates over 90% packet delivery ratio as QoS, over 97% coverage for the users in flow tables, and 0.28 KJ/Bit average power consumption.\",\"PeriodicalId\":54864,\"journal\":{\"name\":\"Journal of Communications and Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2023-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/iel7/5449605/10077469/10012517.pdf\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Communications and Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10012517/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10012517/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Energy-efficient RL-based aerial network deployment testbed for disaster areas
Rapid deployment of wireless devices with 5G and beyond has enabled a connected world. However, an immediate surge in demand right after a disaster can temporarily paralyze network infrastructure. A continuous flow of information is crucial during disasters to coordinate rescue operations and locate survivors. Communication infrastructure built for users in disaster areas must support rapid deployment, increased coverage, and high availability. Unmanned aerial vehicles (UAVs) offer a potential solution for rapid deployment, as they are unaffected by the traffic jams and physical road damage that follow a disaster. In addition, ad-hoc WiFi communication allows broadcast domains to be created within a clear channel, which eases one-to-many communication. Moreover, reinforcement learning (RL) helps reduce the computational cost and improve the solution accuracy of the NP-hard aerial network deployment problem. To this end, this paper proposes a novel flying WiFi ad-hoc network management model. The model uses deep Q-learning to maintain quality of service (QoS), increase user equipment (UE) coverage, and optimize power efficiency. Furthermore, a testbed was deployed on the Istanbul Technical University (ITU) campus to train the model. Training on the testbed yields a packet delivery ratio above 90% as the QoS metric, over 97% coverage for the users in the flow tables, and an average power consumption of 0.28 kJ/bit.
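To make the deep Q-learning objective concrete, the sketch below shows how an agent might score UAV placement decisions against the three goals named in the abstract (QoS as packet delivery ratio, UE coverage, and power efficiency). The weights, the energy budget, and the epsilon-greedy helper are illustrative assumptions for exposition, not the paper's actual reward formulation or network architecture.

```python
# Hypothetical reward shaping for a deep-Q-learning UAV placement agent.
# Weights and the energy normalization are assumptions, not the paper's values.
import random
from dataclasses import dataclass

@dataclass
class NetworkState:
    pdr: float             # packet delivery ratio in [0, 1] (QoS proxy)
    coverage: float        # fraction of UEs covered, in [0, 1]
    energy_per_bit: float  # average power consumption, kJ/bit

def reward(state: NetworkState,
           w_qos: float = 0.4,
           w_cov: float = 0.4,
           w_energy: float = 0.2,
           energy_budget: float = 0.5) -> float:
    """Weighted reward: higher PDR and coverage are rewarded; energy use
    is penalized relative to an assumed per-bit budget."""
    energy_term = max(0.0, 1.0 - state.energy_per_bit / energy_budget)
    return w_qos * state.pdr + w_cov * state.coverage + w_energy * energy_term

def choose_action(q_values: list[float], epsilon: float = 0.1) -> int:
    """Epsilon-greedy selection over candidate UAV waypoints, standing in
    for the argmax over the deep-Q network's outputs."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

if __name__ == "__main__":
    # State echoing the reported testbed figures: ~0.92 PDR, ~0.97 coverage,
    # 0.28 kJ/bit average consumption.
    s = NetworkState(pdr=0.92, coverage=0.97, energy_per_bit=0.28)
    print(f"reward = {reward(s):.3f}")
    print("action =", choose_action([0.1, 0.7, 0.3]))
```

Under these assumed weights, the agent is pushed toward placements that hold PDR and coverage high while keeping energy per bit below the budget; in a full DQN the reward would feed the temporal-difference update of the Q-network.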
Journal introduction:
The JOURNAL OF COMMUNICATIONS AND NETWORKS is published six times per year, and is committed to publishing high-quality papers that advance the state-of-the-art and practical applications of communications and information networks. Theoretical research contributions presenting new techniques, concepts, or analyses, applied contributions reporting on experiences and experiments, and tutorial expositions of permanent reference value are welcome. The subjects covered by this journal include all topics in communication theory and techniques, communication systems, and information networks, across the sections Communication Theory and Systems, Wireless Communications, and Networks and Services.