Load Balancing for Communication Networks via Data-Efficient Deep Reinforcement Learning

Di Wu, Jikun Kang, Yi Tian Xu, Hang Li, Jimmy Li, Xi Chen, D. Rivkin, Michael Jenkin, Taeseop Lee, Intaik Park, Xue Liu, Gregory Dudek

2021 IEEE Global Communications Conference (GLOBECOM), December 2021. DOI: 10.1109/GLOBECOM46510.2021.9685294
Abstract: Within a cellular network, load balancing between different cells is of critical importance to network performance and quality of service. Most existing load balancing algorithms are manually designed and tuned rule-based methods, for which near-optimality is almost impossible to achieve. These rule-based methods also struggle to adapt quickly to traffic changes in real-world environments. Given the success of Reinforcement Learning (RL) algorithms in many application domains, there have been a number of efforts to tackle load balancing for communication systems using RL-based methods. To our knowledge, none of these efforts have addressed the need for data efficiency within the RL framework, which is one of the main obstacles to applying RL to wireless network load balancing. In this paper, we formulate the communication load balancing problem as a Markov Decision Process and propose a data-efficient transfer deep reinforcement learning algorithm to address it. Experimental results show that the proposed method significantly improves system performance over the baselines and is more robust to environmental changes.
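The abstract states that load balancing is cast as a Markov Decision Process and solved with a data-efficient transfer deep RL algorithm, but it does not spell out the formulation. The sketch below is a rough illustration only, not the authors' design: it shows one plausible way such an MDP could look, where the state is a per-cell load vector, an action hands over a fraction of traffic between a cell pair, and the reward penalizes load imbalance. All class names, parameters, and the simple greedy policy are assumptions introduced here for illustration.

```python
# Hypothetical toy MDP for cell load balancing (illustrative assumption,
# not the formulation from the paper).
import numpy as np

class ToyLoadBalancingMDP:
    def __init__(self, num_cells=4, shift_fraction=0.1, seed=0):
        self.num_cells = num_cells
        self.shift_fraction = shift_fraction  # fraction of load moved per handover action
        self.rng = np.random.default_rng(seed)
        self.loads = None

    def reset(self):
        # Start from an unbalanced random load distribution (arbitrary choice).
        self.loads = self.rng.uniform(0.1, 1.0, size=self.num_cells)
        return self.loads.copy()

    def step(self, action):
        # Action = (source_cell, target_cell): hand over part of the source load.
        src, dst = action
        moved = self.shift_fraction * self.loads[src]
        self.loads[src] -= moved
        self.loads[dst] += moved
        # Small random traffic drift, standing in for real demand changes.
        self.loads = np.clip(self.loads + self.rng.normal(0.0, 0.02, self.num_cells), 0.0, None)
        reward = -float(np.std(self.loads))  # balanced loads -> reward near 0
        return self.loads.copy(), reward

# Greedy heuristic used only to exercise the environment: always move load
# from the most-loaded cell to the least-loaded cell.
env = ToyLoadBalancingMDP()
state = env.reset()
for _ in range(20):
    action = (int(np.argmax(state)), int(np.argmin(state)))
    state, reward = env.step(action)
print("final loads:", np.round(state, 3), "imbalance:", round(-reward, 3))
```

In the paper's setting, the greedy heuristic above would be replaced by a learned deep RL policy, with transfer across traffic scenarios used to reduce the amount of interaction data needed; those details are what the full paper addresses.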