Faizul Haq, Adeeba Naaz, T. V. P. K. Bantupalli, Kotaro Kataoka
{"title":"DRL-FTO:基于深度强化学习的SDN动态流规则超时优化","authors":"Faizul Haq, Adeeba Naaz, T. V. P. K. Bantupalli, Kotaro Kataoka","doi":"10.1145/3497777.3498549","DOIUrl":null,"url":null,"abstract":"Optimization of flow rule timeouts promises to reduce the frequency of message exchange between the SDN controller and the switches and contributes to the reduction of the controller load. However, such optimization is challenging due to the dynamically changing traffic patterns. Many algorithm-based solutions are based on the estimation of flow duration. However, such estimation approaches cannot achieve as good results as learning through observation, the actual attempt to optimize the timeout, and evaluating such actions in the network. This paper proposes “DRL-FTO”, a Deep Reinforcement Learning based approach to optimize the flow rule timeouts so that the number of message exchanges between the SDN controller and switches is minimized even though the characteristics of incoming traffic dynamically changes. We developed the proof of concept implementation of DRL-FTO and evaluated using the synthesized Internet traffic in Mininet environment with Ryu SDN controller. The evaluation results exhibited that DRL-FTO reduces the message exchange without compromising the throughput in the data plane, and, as a positive consequence, the SDN controller load can also be reduced.","PeriodicalId":248679,"journal":{"name":"Proceedings of the 16th Asian Internet Engineering Conference","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"DRL-FTO: Dynamic Flow Rule Timeout Optimization in SDN using Deep Reinforcement Learning\",\"authors\":\"Faizul Haq, Adeeba Naaz, T. V. P. K. 
Bantupalli, Kotaro Kataoka\",\"doi\":\"10.1145/3497777.3498549\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Optimization of flow rule timeouts promises to reduce the frequency of message exchange between the SDN controller and the switches and contributes to the reduction of the controller load. However, such optimization is challenging due to the dynamically changing traffic patterns. Many algorithm-based solutions are based on the estimation of flow duration. However, such estimation approaches cannot achieve as good results as learning through observation, the actual attempt to optimize the timeout, and evaluating such actions in the network. This paper proposes “DRL-FTO”, a Deep Reinforcement Learning based approach to optimize the flow rule timeouts so that the number of message exchanges between the SDN controller and switches is minimized even though the characteristics of incoming traffic dynamically changes. We developed the proof of concept implementation of DRL-FTO and evaluated using the synthesized Internet traffic in Mininet environment with Ryu SDN controller. 
The evaluation results exhibited that DRL-FTO reduces the message exchange without compromising the throughput in the data plane, and, as a positive consequence, the SDN controller load can also be reduced.\",\"PeriodicalId\":248679,\"journal\":{\"name\":\"Proceedings of the 16th Asian Internet Engineering Conference\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 16th Asian Internet Engineering Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3497777.3498549\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 16th Asian Internet Engineering Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3497777.3498549","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
DRL-FTO: Dynamic Flow Rule Timeout Optimization in SDN using Deep Reinforcement Learning
Optimization of flow rule timeouts promises to reduce the frequency of message exchange between the SDN controller and the switches, thereby reducing the controller load. However, such optimization is challenging because traffic patterns change dynamically. Many algorithmic solutions rely on estimating flow duration, but such estimation cannot match the results of learning through observation: actually attempting to optimize the timeout and evaluating those actions in the network. This paper proposes DRL-FTO, a Deep Reinforcement Learning based approach that optimizes flow rule timeouts so that the number of message exchanges between the SDN controller and switches is minimized even as the characteristics of incoming traffic change dynamically. We developed a proof-of-concept implementation of DRL-FTO and evaluated it using synthesized Internet traffic in a Mininet environment with the Ryu SDN controller. The evaluation results show that DRL-FTO reduces message exchange without compromising data-plane throughput and, as a positive consequence, also reduces the SDN controller load.
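To illustrate the kind of learning loop the abstract describes, the following is a minimal sketch, not the paper's implementation: a tabular Q-learning agent that picks an idle timeout for the current (coarsely classified) traffic pattern, with a toy reward that penalizes controller-switch message exchanges caused by premature rule expiry and penalizes table occupancy from overly long timeouts. The timeout candidates, traffic classes, and cost numbers are all assumptions chosen for the example; the actual DRL-FTO design (state features, network, reward) is specified in the paper itself.

```python
import random

TIMEOUTS = [1, 5, 10, 30]          # candidate idle timeouts in seconds (assumed)
STATES = ["bursty", "steady"]      # coarse traffic classes (assumed)

class TimeoutAgent:
    """Tabular Q-learning over (traffic class, timeout) pairs."""
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {(s, t): 0.0 for s in STATES for t in TIMEOUTS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy: mostly pick the best-known timeout, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(TIMEOUTS)
        return max(TIMEOUTS, key=lambda t: self.q[(state, t)])

    def learn(self, state, timeout, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, t)] for t in TIMEOUTS)
        td = reward + self.gamma * best_next - self.q[(state, timeout)]
        self.q[(state, timeout)] += self.alpha * td

def reward(state, timeout):
    # Toy cost model (illustrative numbers only): bursty traffic with short
    # inter-packet gaps suffers rule re-installations (packet-in/flow-mod
    # exchanges) under a short timeout; any traffic pays a small flow-table
    # occupancy cost proportional to the timeout.
    reinstalls = max(0, 10 - timeout) if state == "bursty" else 1
    occupancy_cost = timeout * 0.1
    return -(reinstalls + occupancy_cost)

random.seed(0)
agent = TimeoutAgent()
state = random.choice(STATES)
for _ in range(2000):
    t = agent.act(state)
    r = reward(state, t)
    nxt = random.choice(STATES)   # traffic pattern drifts over time
    agent.learn(state, t, r, nxt)
    state = nxt

# Under this cost model the agent should learn a longer timeout for bursty
# traffic (to avoid re-installations) than for steady traffic.
best_bursty = max(TIMEOUTS, key=lambda t: agent.q[("bursty", t)])
best_steady = max(TIMEOUTS, key=lambda t: agent.q[("steady", t)])
print(best_bursty, best_steady)
```

The key point the sketch captures is the one the abstract makes against pure flow-duration estimation: the agent never predicts flow lifetimes directly, it tries timeouts and learns from the observed cost of each attempt.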