Effect of reinforcement learning on routing of cognitive radio ad-hoc networks

Tauqeer Safdar, Halabi B. Hasbulah, Maaz Rehan
{"title":"Effect of reinforcement learning on routing of cognitive radio ad-hoc networks","authors":"Tauqeer Safdar, Halabi B. Hasbulah, Maaz Rehan","doi":"10.1109/ISMSC.2015.7594025","DOIUrl":null,"url":null,"abstract":"Today's network control systems have very limited ability to adapt the changes in network. The addition of reinforcement learning (RL) based network management agents can improve Quality of Service (QoS) by reconfiguring the network layer protocol parameters in response to observed network performance conditions. This paper presents a closed-loop approach to tuning the parameters of the protocol of network layer based on current and previous network state observation for user and channel interference, specifically by modifying some parameters of Ad-Hoc On-Demand Distance Vector (AODV) routing protocol for Cognitive Radio Ad-Hoc Network (CRAHN) environment. In this work, we provide a self-contained learning method based on machine-learning techniques that have been or can be used for developing cognitive routing protocols. Generally, the developed mathematical model based on the one RL technique to handle the route decision in channel switching and user mobility situation so that the overall end-to-end delay can be minimized and the overall throughput of the network can be maximized according to the application requirement in CRAHN environment. Here is the proposed self-configuration method based on RL technique can improve the performance of the original AODV protocol, reducing protocol overhead and end-to-end delay for CRAHN while increasing the packet delivery ratio depending upon the traffic model. Simulation results are shown using NS-2 which shows the proposed model performance is much better than the previous AODV protocol.","PeriodicalId":407600,"journal":{"name":"2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC)","volume":"33 7-8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMSC.2015.7594025","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Today's network control systems have very limited ability to adapt to changes in the network. Adding reinforcement learning (RL) based network management agents can improve Quality of Service (QoS) by reconfiguring network-layer protocol parameters in response to observed network performance. This paper presents a closed-loop approach to tuning network-layer protocol parameters based on current and previous observations of network state, user mobility, and channel interference, specifically by modifying parameters of the Ad-Hoc On-Demand Distance Vector (AODV) routing protocol for the Cognitive Radio Ad-Hoc Network (CRAHN) environment. In this work, we provide a self-contained learning method based on machine-learning techniques that have been, or can be, used to develop cognitive routing protocols. The developed mathematical model applies a single RL technique to route decisions under channel switching and user mobility, so that overall end-to-end delay is minimized and overall network throughput is maximized according to the application requirements of the CRAHN environment. The proposed RL-based self-configuration method improves on the original AODV protocol, reducing protocol overhead and end-to-end delay in CRAHNs while increasing the packet delivery ratio, depending on the traffic model. Simulation results obtained with NS-2 show that the proposed model performs much better than the original AODV protocol.
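The abstract does not give the paper's concrete RL formulation, but the route decision it describes maps naturally onto tabular Q-learning: an agent observes a link state (e.g., primary-user activity, node mobility), picks a (next hop, channel) action, and receives a reward that penalizes end-to-end delay and rewards successful delivery. The sketch below is a minimal illustration under those assumptions; the state labels, action tuples, and reward weights are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

class QRoutingAgent:
    """Tabular Q-learning agent choosing a (next_hop, channel) action.

    State/action encodings here are illustrative guesses; the paper does
    not publish its exact formulation.
    """

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.actions = actions       # candidate (next_hop, channel) pairs
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor
        self.epsilon = epsilon       # exploration probability

    def choose(self, state):
        """Epsilon-greedy action selection."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard Q-learning temporal-difference update."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error


# Toy episode: the reward trades delivery success off against measured
# delay, mirroring the paper's goal of minimizing end-to-end delay while
# maximizing throughput. All values below are stand-ins, not simulation data.
agent = QRoutingAgent(actions=[("nodeB", 1), ("nodeB", 2), ("nodeC", 1)])
state = ("pu_active", "low_mobility")  # hypothetical CRAHN link state
for _ in range(100):
    action = agent.choose(state)
    delay_ms = random.uniform(10, 50)            # stand-in for measured delay
    delivered = random.random() < 0.9            # stand-in for delivery outcome
    reward = (1.0 if delivered else -1.0) - 0.01 * delay_ms
    agent.update(state, action, reward, state)   # single-state toy loop
```

In a real CRAHN deployment the next state would come from fresh channel and mobility observations rather than repeating the current one; the single-state loop above only demonstrates the update rule.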