Wei Zhou, Xing Jiang, Qingsong Luo, Bingli Guo, Xiang Sun, Fengyuan Sun, Lingyu Meng
{"title":"AQROM:软件定义网络中一种基于异步优因子-评论家的服务质量感知路由优化机制","authors":"Wei Zhou , Xing Jiang , Qingsong Luo , Bingli Guo , Xiang Sun , Fengyuan Sun , Lingyu Meng","doi":"10.1016/j.dcan.2022.11.016","DOIUrl":null,"url":null,"abstract":"<div><div>In Software-Defined Networks (SDNs), determining how to efficiently achieve Quality of Service (QoS)-aware routing is challenging but critical for significantly improving the performance of a network, where the metrics of QoS can be defined as, for example, average latency, packet loss ratio, and throughput. The SDN controller can use network statistics and a Deep Reinforcement Learning (DRL) method to resolve this challenge. In this paper, we formulate dynamic routing in an SDN as a Markov decision process and propose a DRL algorithm called the Asynchronous Advantage Actor-Critic QoS-aware Routing Optimization Mechanism (AQROM) to determine routing strategies that balance the traffic loads in the network. AQROM can improve the QoS of the network and reduce the training time via dynamic routing strategy updates; that is, the reward function can be dynamically and promptly altered based on the optimization objective regardless of the network topology and traffic pattern. AQROM can be considered as one-step optimization and a black-box routing mechanism in high-dimensional input and output sets for both discrete and continuous states, and actions with respect to the operations in the SDN. Extensive simulations were conducted using OMNeT++ and the results demonstrated that AQROM 1) achieved much faster and stable convergence than the Deep Deterministic Policy Gradient (DDPG) and Advantage Actor-Critic (A2C), 2) incurred a lower packet loss ratio and latency than Open Shortest Path First (OSPF), DDPG, and A2C, and 3) resulted in higher and more stable throughput than OSPF, DDPG, and A2C.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 5","pages":"Pages 1405-1414"},"PeriodicalIF":7.5000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AQROM: A quality of service aware routing optimization mechanism based on asynchronous advantage actor-critic in software-defined networks\",\"authors\":\"Wei Zhou , Xing Jiang , Qingsong Luo , Bingli Guo , Xiang Sun , Fengyuan Sun , Lingyu Meng\",\"doi\":\"10.1016/j.dcan.2022.11.016\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In Software-Defined Networks (SDNs), determining how to efficiently achieve Quality of Service (QoS)-aware routing is challenging but critical for significantly improving the performance of a network, where the metrics of QoS can be defined as, for example, average latency, packet loss ratio, and throughput. The SDN controller can use network statistics and a Deep Reinforcement Learning (DRL) method to resolve this challenge. In this paper, we formulate dynamic routing in an SDN as a Markov decision process and propose a DRL algorithm called the Asynchronous Advantage Actor-Critic QoS-aware Routing Optimization Mechanism (AQROM) to determine routing strategies that balance the traffic loads in the network. AQROM can improve the QoS of the network and reduce the training time via dynamic routing strategy updates; that is, the reward function can be dynamically and promptly altered based on the optimization objective regardless of the network topology and traffic pattern. 
AQROM can be considered as one-step optimization and a black-box routing mechanism in high-dimensional input and output sets for both discrete and continuous states, and actions with respect to the operations in the SDN. Extensive simulations were conducted using OMNeT++ and the results demonstrated that AQROM 1) achieved much faster and stable convergence than the Deep Deterministic Policy Gradient (DDPG) and Advantage Actor-Critic (A2C), 2) incurred a lower packet loss ratio and latency than Open Shortest Path First (OSPF), DDPG, and A2C, and 3) resulted in higher and more stable throughput than OSPF, DDPG, and A2C.</div></div>\",\"PeriodicalId\":48631,\"journal\":{\"name\":\"Digital Communications and Networks\",\"volume\":\"10 5\",\"pages\":\"Pages 1405-1414\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Communications and Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352864822002577\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352864822002577","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
AQROM: A quality of service aware routing optimization mechanism based on asynchronous advantage actor-critic in software-defined networks
In Software-Defined Networks (SDNs), efficiently achieving Quality of Service (QoS)-aware routing is challenging but critical for significantly improving network performance, where QoS is measured by metrics such as average latency, packet loss ratio, and throughput. The SDN controller can address this challenge by combining network statistics with a Deep Reinforcement Learning (DRL) method. In this paper, we formulate dynamic routing in an SDN as a Markov decision process and propose a DRL algorithm, the Asynchronous Advantage Actor-Critic QoS-aware Routing Optimization Mechanism (AQROM), to determine routing strategies that balance traffic loads across the network. AQROM improves the QoS of the network and reduces training time via dynamic routing strategy updates; that is, the reward function can be promptly altered to match the optimization objective, regardless of the network topology and traffic pattern. AQROM can be viewed as a one-step, black-box routing mechanism that handles high-dimensional input and output sets with both discrete and continuous states and actions with respect to SDN operations. Extensive simulations were conducted in OMNeT++, and the results demonstrate that AQROM 1) converges faster and more stably than Deep Deterministic Policy Gradient (DDPG) and Advantage Actor-Critic (A2C), 2) incurs a lower packet loss ratio and latency than Open Shortest Path First (OSPF), DDPG, and A2C, and 3) achieves higher and more stable throughput than OSPF, DDPG, and A2C.
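To make the reward design concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of how a QoS-aware reward could combine the metrics named in the abstract: average latency, packet loss ratio, and throughput. The QoSStats container, the qos_reward function, and the weights w_latency, w_loss, and w_throughput are illustrative assumptions; adjusting the weights stands in for the dynamically altered optimization objective described in the abstract.

```python
# Illustrative sketch only: a QoS-aware per-step reward for SDN routing,
# combining the three metrics mentioned in the abstract. Names and weights
# are hypothetical, not taken from the paper.
from dataclasses import dataclass


@dataclass
class QoSStats:
    """Per-interval network statistics collected by the SDN controller."""
    avg_latency_ms: float     # average end-to-end latency
    packet_loss_ratio: float  # fraction of packets lost, in [0, 1]
    throughput_mbps: float    # aggregate delivered throughput


def qos_reward(stats: QoSStats,
               w_latency: float = 1.0,
               w_loss: float = 1.0,
               w_throughput: float = 1.0) -> float:
    """Return a scalar reward: higher throughput raises it,
    higher latency and loss lower it.

    Changing the weights corresponds to switching the optimization
    objective without changing the underlying MDP formulation.
    """
    return (w_throughput * stats.throughput_mbps
            - w_latency * stats.avg_latency_ms
            - w_loss * stats.packet_loss_ratio * 100.0)


# Example: reward for one monitoring interval.
print(qos_reward(QoSStats(avg_latency_ms=12.5,
                          packet_loss_ratio=0.02,
                          throughput_mbps=480.0)))
```

In an actor-critic setup such as A3C, a reward of this shape would be fed to the critic after each routing decision; the weighting scheme shown here is only one plausible way to realize the objective-dependent reward the paper describes.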
About the Journal:
Digital Communications and Networks is a journal focusing on communication systems and networks. We publish original research articles and authoritative reviews, all of which undergo rigorous peer review. All articles are fully Open Access and available on ScienceDirect, and the journal is indexed by databases including the Science Citation Index Expanded (SCIE) and Scopus.
In addition to regular articles, we may consider exceptional conference papers that have been significantly expanded. We also periodically release special issues focusing on specific aspects of the field.
Digital Communications and Networks aims to provide high-quality, accessible research for scholars working on communication systems and networks.