Paul Almasan, José Suárez-Varela, Bo-Xi Wu, Shihan Xiao, P. Barlet-Ros, A. Cabellos-Aparicio
Towards Real-Time Routing Optimization with Deep Reinforcement Learning: Open Challenges

DOI: 10.1109/HPSR52026.2021.9481864
Published in: 2021 IEEE 22nd International Conference on High Performance Switching and Routing (HPSR)
Publication date: 2021-06-07
Citations: 2
Abstract
The digital transformation is pushing existing network technologies towards new horizons, enabling new applications (e.g., vehicular networks). As a result, the networking community has seen a noticeable increase in the requirements of emerging network applications. One main open challenge is the need to adapt control systems to highly dynamic network scenarios. Existing network optimization technologies do not meet the requirements to operate effectively in real time: some are based on hand-crafted heuristics with limited performance and adaptability, while others rely on optimizers that are often too time-consuming. Recent advances in Deep Reinforcement Learning (DRL) have shown dramatic improvements in decision-making and automated control problems. Consequently, DRL represents a promising technique to efficiently solve a variety of relevant network optimization problems, such as online routing. In this paper, we explore the use of state-of-the-art DRL technologies for real-time routing optimization and outline some relevant open challenges to achieve production-ready DRL-based solutions.
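To make the DRL-for-routing idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual method, which uses deep networks): tabular Q-learning on a toy four-node topology with made-up link costs, where the state is the current node, the action is the next hop, and the reward is the negative link cost, so the learned greedy policy approximates a lowest-cost route.

```python
import random

# Toy network: adjacency map with link costs (hypothetical values for illustration)
GRAPH = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1},
    "C": {"A": 1, "D": 5},
    "D": {"B": 1, "C": 5},
}
DEST = "D"

# Q-table: Q[node][next_hop] estimates the (discounted) cost-to-go of that choice
Q = {node: {nbr: 0.0 for nbr in nbrs} for node, nbrs in GRAPH.items()}

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Run epsilon-greedy Q-learning episodes from random source nodes to DEST."""
    rng = random.Random(seed)
    for _ in range(episodes):
        node = rng.choice([n for n in GRAPH if n != DEST])
        while node != DEST:
            # Epsilon-greedy exploration over the current node's neighbors
            if rng.random() < eps:
                nxt = rng.choice(list(GRAPH[node]))
            else:
                nxt = max(Q[node], key=Q[node].get)
            reward = -GRAPH[node][nxt]  # reward = negative link cost
            future = 0.0 if nxt == DEST else max(Q[nxt].values())
            # Standard Q-learning temporal-difference update
            Q[node][nxt] += alpha * (reward + gamma * future - Q[node][nxt])
            node = nxt

def route(src):
    """Follow the learned greedy policy from src to DEST (with a loop guard)."""
    path, node = [src], src
    while node != DEST and len(path) < 10:
        node = max(Q[node], key=Q[node].get)
        path.append(node)
    return path

train()
print(route("A"))
```

After training, the greedy policy routes A through B rather than the more expensive C link. A production-grade DRL agent would replace the Q-table with a neural network and the toy state with a richer representation of traffic and topology, which is precisely where the open challenges discussed in the paper arise.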