{"title":"Effect of reinforcement learning on routing of cognitive radio ad-hoc networks","authors":"Tauqeer Safdar, Halabi B. Hasbulah, Maaz Rehan","doi":"10.1109/ISMSC.2015.7594025","DOIUrl":null,"url":null,"abstract":"Today's network control systems have very limited ability to adapt the changes in network. The addition of reinforcement learning (RL) based network management agents can improve Quality of Service (QoS) by reconfiguring the network layer protocol parameters in response to observed network performance conditions. This paper presents a closed-loop approach to tuning the parameters of the protocol of network layer based on current and previous network state observation for user and channel interference, specifically by modifying some parameters of Ad-Hoc On-Demand Distance Vector (AODV) routing protocol for Cognitive Radio Ad-Hoc Network (CRAHN) environment. In this work, we provide a self-contained learning method based on machine-learning techniques that have been or can be used for developing cognitive routing protocols. Generally, the developed mathematical model based on the one RL technique to handle the route decision in channel switching and user mobility situation so that the overall end-to-end delay can be minimized and the overall throughput of the network can be maximized according to the application requirement in CRAHN environment. Here is the proposed self-configuration method based on RL technique can improve the performance of the original AODV protocol, reducing protocol overhead and end-to-end delay for CRAHN while increasing the packet delivery ratio depending upon the traffic model. Simulation results are shown using NS-2 which shows the proposed model performance is much better than the previous AODV protocol.","PeriodicalId":407600,"journal":{"name":"2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC)","volume":"33 7-8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMSC.2015.7594025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Today's network control systems have very limited ability to adapt to changes in the network. Adding reinforcement learning (RL) based network management agents can improve Quality of Service (QoS) by reconfiguring network-layer protocol parameters in response to observed network performance. This paper presents a closed-loop approach to tuning network-layer protocol parameters based on current and previous observations of network state, user mobility, and channel interference, specifically by modifying parameters of the Ad-Hoc On-Demand Distance Vector (AODV) routing protocol for the Cognitive Radio Ad-Hoc Network (CRAHN) environment. In this work, we provide a self-contained learning method built on machine-learning techniques that have been, or can be, used to develop cognitive routing protocols. The developed mathematical model applies a single RL technique to route decisions under channel switching and user mobility, so that overall end-to-end delay is minimized and overall network throughput is maximized according to the application requirements of the CRAHN environment. The proposed RL-based self-configuration method improves on the original AODV protocol, reducing protocol overhead and end-to-end delay for CRAHNs while increasing the packet delivery ratio, depending on the traffic model. NS-2 simulation results show that the proposed model performs substantially better than the original AODV protocol.
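To make the closed-loop idea concrete, below is a minimal Q-learning sketch of the kind of RL-driven route decision the abstract describes: an agent observes a (channel, mobility) state and learns which routing action minimizes end-to-end delay. The states, actions, hyperparameters, and the toy delay model in simulate_delay are all hypothetical illustrations, not the authors' actual mathematical model or NS-2 configuration.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

# Hypothetical state space: primary-user channel occupancy x node mobility.
states = [(ch, mob) for ch in ("ch_free", "ch_busy") for mob in ("static", "mobile")]
# Hypothetical AODV-level actions the agent may take.
actions = ["keep_route", "switch_channel", "rediscover_route"]

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run value

def simulate_delay(state, action):
    """Toy stand-in for the observed end-to-end delay (lower is better)."""
    base = 50.0 if state[0] == "ch_busy" else 10.0
    base += 20.0 if state[1] == "mobile" else 0.0
    if action == "switch_channel" and state[0] == "ch_busy":
        base -= 30.0  # switching helps when a primary user occupies the channel
    if action == "rediscover_route" and state[1] == "mobile":
        base -= 15.0  # a fresh route helps under node mobility
    return max(base + random.gauss(0.0, 2.0), 1.0)

def choose_action(state):
    if random.random() < EPSILON:                      # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # exploit

for episode in range(5000):
    s = random.choice(states)
    a = choose_action(s)
    reward = -simulate_delay(s, a)                     # reward = negative delay
    s_next = random.choice(states)                     # environment drifts randomly here
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])  # Q-learning update

for s in states:
    print(s, "->", max(actions, key=lambda a: Q[(s, a)]))

Under this toy reward, the learned policy tends to switch channels when the channel is busy and rediscover routes when nodes are mobile, mirroring the paper's goal of minimizing delay under channel switching and user mobility.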