Reinforcement Learning Based Congestion Control in a Real Environment
Lei Zhang, Kewei Zhu, Junchen Pan, Hang Shi, Yong Jiang, Yong Cui
2020 29th International Conference on Computer Communications and Networks (ICCCN), August 2020
DOI: 10.1109/ICCCN49398.2020.9209750
Citations: 7
Abstract
Congestion control plays an important role in handling real-world traffic on the Internet, and it has been dominated by hand-crafted heuristics for decades. Recently, reinforcement learning has shown great potential to automatically learn optimal or near-optimal control policies that enhance the performance of congestion control. However, existing solutions train agents in either simulators or emulators, which cannot fully reflect real-world environments and thus degrade the performance of network communication. In order to eliminate the performance degradation caused by training in a simulated environment, we first highlight the necessity of, and the challenges in, training a learning-based agent in real-world networks. We then propose ARC, a framework for learning congestion control policies in a real environment based on asynchronous execution, and demonstrate its effectiveness in accelerating training. We evaluate our scheme on a real testbed and compare it with state-of-the-art congestion control schemes. Experimental results demonstrate that our scheme achieves higher throughput and lower latency than existing schemes.
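The abstract does not reproduce ARC's design, but the core idea it names, asynchronous execution, can be sketched: the sender keeps transmitting with the most recent action it has while the (slower) learner computes the next one, so training proceeds without stalling a live connection. Everything below is an illustrative assumption, not the authors' implementation: `ToyLink` is a toy path model, and the learner is a simple epsilon-greedy bandit over candidate congestion windows rather than the paper's RL agent.

```python
import queue
import random
import threading

class ToyLink:
    """Toy stand-in for a real network path: utility grows with throughput
    but is penalized once the congestion window exceeds link capacity."""
    def __init__(self, capacity=10.0):
        self.capacity = capacity

    def step(self, cwnd):
        throughput = min(cwnd, self.capacity)
        queueing = max(0.0, cwnd - self.capacity)  # excess packets queue up
        return throughput - 2.0 * queueing         # reward = utility

def train_async(steps=200, arms=(2.0, 6.0, 10.0, 14.0), seed=0):
    """Asynchronous execution sketch: a sender thread transmits continuously
    with the latest available action while the learner picks the next one.
    Each reward is tagged with the cwnd actually used, so stale actions do
    not corrupt the learner's statistics."""
    link = ToyLink(capacity=10.0)
    actions = queue.Queue(maxsize=1)   # learner -> sender (latest action only)
    rewards = queue.Queue()            # sender -> learner (cwnd used, reward)
    stop = threading.Event()

    def sender():
        cwnd = arms[0]
        while not stop.is_set():
            try:
                cwnd = actions.get(timeout=0.01)   # adopt a new action if ready
            except queue.Empty:
                pass                               # else keep using the stale one
            rewards.put((cwnd, link.step(cwnd)))

    t = threading.Thread(target=sender, daemon=True)
    t.start()

    rng = random.Random(seed)
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}
    for _ in range(steps):
        untried = [a for a in arms if counts[a] == 0]
        if untried:                                # sample every arm once first
            proposal = untried[0]
        elif rng.random() < 0.1:                   # epsilon-greedy exploration
            proposal = rng.choice(arms)
        else:                                      # exploit the best arm so far
            proposal = max(arms, key=lambda a: sums[a] / counts[a])
        try:
            actions.put(proposal, timeout=0.01)    # fine to skip if the sender
        except queue.Full:                         # hasn't consumed the last one
            pass
        cwnd, r = rewards.get()                    # learn from what actually ran
        counts[cwnd] = counts.get(cwnd, 0) + 1
        sums[cwnd] = sums.get(cwnd, 0.0) + r

    stop.set()
    t.join(timeout=1.0)
    tried = [a for a in arms if counts[a] > 0]
    return max(tried, key=lambda a: sums[a] / counts[a])
```

Tagging each reward with the action the sender actually executed is the key bookkeeping step in this pattern: in a real network the learner's action may arrive late or be reused for several RTTs, and attributing rewards to the intended rather than the executed action would bias training.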