{"title":"基于网络态势感知和深度强化学习的SDWN智能路由算法","authors":"Jinqiang Li;Miao Ye;Linqiang Huang;Xiaofang Deng;Hongbing Qiu;Yong Wang;Qiuxiang Jiang","doi":"10.1109/ACCESS.2023.3302178","DOIUrl":null,"url":null,"abstract":"To address the challenges of obtaining network state information, flexibly forwarding data, and improving the communication quality of service (QoS) in wireless network transmission environments in response to dynamic changes in network topology, this paper introduces an intelligent routing algorithm based on deep reinforcement learning (DRL) with network situational awareness under a software-defined wireless networking (SDWN) architecture. First, comprehensive network traffic information is collected under the SDWN architecture, and a graph convolutional network-gated recurrent unit (GCN-GRU) prediction mechanism is used to perceive future traffic trends. Second, a proximal policy optimization (PPO) DRL-based data forwarding mechanism is designed in the knowledge plane. The predicted network traffic matrix and topology information matrix are treated as the DRL environment, while next-hop adjacent nodes are treated as executable actions, and action selection policies are designed for different network conditions. To guide the learning and improvement of the DRL agent’s routing strategy, reward functions of different forms are designed by utilizing network link information and different penalty mechanisms. Additionally, importance sampling steps and gradient clipping methods are employed during gradient updating to enhance the convergence speed and stability of the designed intelligent routing method. Experimental results show that this solution outperforms traditional routing methods in network throughput, delay, packet loss rate, and wireless node distance. Compared to value-function-based Dueling Deep Q-Network (DQN) routing, the convergence of the proposed method is significantly faster and more stable. Simultaneously, hardware storage consumption is reduced, and real-time routing decisions can be made using the current network state information. The source code can be accessed at \n<uri>https://github.com/GuetYe/DRL-PPONSA</uri>\n.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"11 ","pages":"83322-83342"},"PeriodicalIF":3.4000,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/6287639/10005208/10209181.pdf","citationCount":"0","resultStr":"{\"title\":\"An Intelligent SDWN Routing Algorithm Based on Network Situational Awareness and Deep Reinforcement Learning\",\"authors\":\"Jinqiang Li;Miao Ye;Linqiang Huang;Xiaofang Deng;Hongbing Qiu;Yong Wang;Qiuxiang Jiang\",\"doi\":\"10.1109/ACCESS.2023.3302178\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To address the challenges of obtaining network state information, flexibly forwarding data, and improving the communication quality of service (QoS) in wireless network transmission environments in response to dynamic changes in network topology, this paper introduces an intelligent routing algorithm based on deep reinforcement learning (DRL) with network situational awareness under a software-defined wireless networking (SDWN) architecture. First, comprehensive network traffic information is collected under the SDWN architecture, and a graph convolutional network-gated recurrent unit (GCN-GRU) prediction mechanism is used to perceive future traffic trends. 
Second, a proximal policy optimization (PPO) DRL-based data forwarding mechanism is designed in the knowledge plane. The predicted network traffic matrix and topology information matrix are treated as the DRL environment, while next-hop adjacent nodes are treated as executable actions, and action selection policies are designed for different network conditions. To guide the learning and improvement of the DRL agent’s routing strategy, reward functions of different forms are designed by utilizing network link information and different penalty mechanisms. Additionally, importance sampling steps and gradient clipping methods are employed during gradient updating to enhance the convergence speed and stability of the designed intelligent routing method. Experimental results show that this solution outperforms traditional routing methods in network throughput, delay, packet loss rate, and wireless node distance. Compared to value-function-based Dueling Deep Q-Network (DQN) routing, the convergence of the proposed method is significantly faster and more stable. Simultaneously, hardware storage consumption is reduced, and real-time routing decisions can be made using the current network state information. The source code can be accessed at \\n<uri>https://github.com/GuetYe/DRL-PPONSA</uri>\\n.\",\"PeriodicalId\":13079,\"journal\":{\"name\":\"IEEE Access\",\"volume\":\"11 \",\"pages\":\"83322-83342\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2023-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/iel7/6287639/10005208/10209181.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Access\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10209181/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Access","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10209181/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
To address the challenges of obtaining network state information, flexibly forwarding data, and improving the communication quality of service (QoS) in wireless network transmission environments in response to dynamic changes in network topology, this paper introduces an intelligent routing algorithm based on deep reinforcement learning (DRL) with network situational awareness under a software-defined wireless networking (SDWN) architecture. First, comprehensive network traffic information is collected under the SDWN architecture, and a graph convolutional network-gated recurrent unit (GCN-GRU) prediction mechanism is used to perceive future traffic trends. Second, a proximal policy optimization (PPO) DRL-based data forwarding mechanism is designed in the knowledge plane. The predicted network traffic matrix and topology information matrix are treated as the DRL environment, while next-hop adjacent nodes are treated as executable actions, and action selection policies are designed for different network conditions. To guide the learning and improvement of the DRL agent’s routing strategy, reward functions of different forms are designed by utilizing network link information and different penalty mechanisms. Additionally, importance sampling steps and gradient clipping methods are employed during gradient updating to enhance the convergence speed and stability of the designed intelligent routing method. Experimental results show that this solution outperforms traditional routing methods in network throughput, delay, packet loss rate, and wireless node distance. Compared to value-function-based Dueling Deep Q-Network (DQN) routing, the convergence of the proposed method is significantly faster and more stable. Simultaneously, hardware storage consumption is reduced, and real-time routing decisions can be made using the current network state information. The source code can be accessed at https://github.com/GuetYe/DRL-PPONSA.
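The abstract outlines the core design: the predicted traffic matrix and topology matrix form the DRL state, next-hop neighbours are the actions, link metrics and penalty terms shape the reward, and PPO's importance-sampling ratio and gradient clipping stabilise training. The sketch below is a minimal, self-contained PyTorch illustration of that framing, not the authors' DRL-PPONSA code: the node count, link metrics, reward shaping, network sizes, and hyperparameters are all placeholder assumptions, and the GCN-GRU traffic prediction is stood in for by a random matrix.

```python
# Minimal illustrative sketch (not the authors' DRL-PPONSA implementation).
# State: predicted traffic matrix + topology matrix (flattened) plus current/destination node.
# Actions: choice of next-hop node. Reward: shaped from per-link metrics with penalties.
import numpy as np
import torch
import torch.nn as nn

N = 8                                                   # hypothetical number of wireless nodes
traffic = np.random.rand(N, N).astype(np.float32)       # stand-in for the GCN-GRU traffic prediction
adjacency = (np.random.rand(N, N) > 0.6).astype(np.float32)
np.fill_diagonal(adjacency, 0.0)                        # no self-links
bandwidth = np.random.rand(N, N).astype(np.float32)     # illustrative per-link metrics
delay = np.random.rand(N, N).astype(np.float32)
pkt_loss = np.random.rand(N, N).astype(np.float32)

def state_tensor(current: int, dst: int) -> torch.Tensor:
    """Flatten the traffic and topology matrices and append one-hot current/destination nodes."""
    onehot = np.zeros(2 * N, dtype=np.float32)
    onehot[current] = 1.0
    onehot[N + dst] = 1.0
    return torch.from_numpy(np.concatenate([traffic.ravel(), adjacency.ravel(), onehot]))

def reward(u: int, v: int, dst: int) -> float:
    """Illustrative reward: link quality minus costs, a penalty for invalid hops, a bonus at the destination."""
    if adjacency[u, v] == 0:
        return -10.0
    r = float(bandwidth[u, v] - delay[u, v] - pkt_loss[u, v])
    return r + (5.0 if v == dst else 0.0)

class ActorCritic(nn.Module):
    """Shared body with a policy head over next-hop nodes and a value head."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.pi = nn.Linear(128, n_actions)
        self.v = nn.Linear(128, 1)

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.v(h)

net = ActorCritic(obs_dim=2 * N * N + 2 * N, n_actions=N)
opt = torch.optim.Adam(net.parameters(), lr=3e-4)

def ppo_update(states, actions, old_logp, advantages, returns, clip_eps=0.2):
    """One PPO step: importance-sampling ratio, clipped surrogate objective, and gradient clipping."""
    logits, values = net(states)
    dist = torch.distributions.Categorical(logits=logits)
    ratio = torch.exp(dist.log_prob(actions) - old_logp)          # importance-sampling ratio
    surrogate = torch.min(ratio * advantages,
                          torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages)
    policy_loss = -surrogate.mean()
    value_loss = ((values.squeeze(-1) - returns) ** 2).mean()
    opt.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=0.5)  # gradient clipping for stability
    opt.step()

# Dummy usage: one update on a batch of random transitions (illustration only).
batch = torch.stack([state_tensor(0, N - 1) for _ in range(4)])
ppo_update(batch,
           actions=torch.randint(0, N, (4,)),
           old_logp=torch.zeros(4),
           advantages=torch.randn(4),
           returns=torch.randn(4))
```

The clipped surrogate (`torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)`) is the standard PPO mechanism corresponding to the importance-sampling and clipping steps mentioned in the abstract; the actual repository may organise the environment, reward, and update loop quite differently.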
IEEE Access: Computer Science, Information Systems; Engineering, Electrical & Electronic
CiteScore: 9.80
Self-citation rate: 7.70%
Articles published per year: 6673
Review time: 6 weeks
Journal description:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary", in that reviewers will either Accept or Reject an article in the form it is submitted in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.