Load Balancing in Cellular Networks: A Reinforcement Learning Approach

Kareem M. Attiah, Karim A. Banawan, Ayman Gaber, A. Elezabi, Karim G. Seddik, Y. Gadallah, Kareem Abdullah

2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC), 2020. DOI: 10.1109/CCNC46108.2020.9045699
Abstract
Balancing traffic among the radio base stations installed in a network is one of the main challenges facing mobile operators, owing to the inhomogeneous geographical distribution of mobile subscribers as well as practical and environmental limitations that prevent acquiring the best locations for building radio sites. This compounds the challenge of satisfying the growing data-rate demands of smartphone users. In this paper, we present a reinforcement learning framework for optimizing neighbor cell relation parameters so as to better balance traffic among the different cells within a defined geographical cluster. We present a comprehensive design of the learning framework, including the key system performance indicators and the design of a general reward function. System-level simulations show that reinforcement-learning-based optimization of neighbor cell borders can significantly improve overall system performance; in particular, with the reward function defined as throughput, an improvement of up to 50% is achieved.
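The abstract does not specify the learning algorithm or the exact form of the neighbor cell relation parameters. The following is a minimal sketch, assuming tabular Q-learning over a cell individual offset (CIO) between two neighboring cells with throughput as the reward; the toy simulator, the assumed CIO range, and the adjustment step sizes are illustrative stand-ins for the system-level simulations described in the paper, not the authors' implementation.

```python
# Hedged sketch: tabular Q-learning that tunes a cell individual offset (CIO)
# between a loaded cell and its neighbor, rewarding aggregate throughput.
import random

ACTIONS = [-1.0, 0.0, +1.0]   # CIO adjustment per step, in dB (assumed granularity)
CIO_RANGE = (-6.0, 6.0)       # allowed CIO window, in dB (assumed)
N_STATES = 5                  # coarse discretization of the load-imbalance state


def toy_throughput(cio, best_cio=3.0):
    """Stand-in for a system-level simulator: throughput peaks when the CIO
    shifts just enough traffic from the loaded cell to its neighbor."""
    return max(0.0, 10.0 - (cio - best_cio) ** 2) + random.uniform(-0.5, 0.5)


def load_state(cio):
    """Map the current CIO to a coarse load-imbalance state index."""
    lo, hi = CIO_RANGE
    frac = (cio - lo) / (hi - lo)
    return min(N_STATES - 1, int(frac * N_STATES))


# Q-table: one row per load state, one column per CIO adjustment.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

cio = 0.0
for step in range(2000):
    s = load_state(cio)
    # Epsilon-greedy selection of a CIO adjustment.
    if random.random() < epsilon:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
    cio = min(CIO_RANGE[1], max(CIO_RANGE[0], cio + ACTIONS[a]))
    reward = toy_throughput(cio)   # reward defined as throughput, as in the paper
    s_next = load_state(cio)
    # Standard Q-learning update.
    Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])

print(f"learned CIO setting: {cio:+.1f} dB")
```

Under these assumptions, the agent converges toward the CIO value that maximizes the (noisy) throughput proxy; in the paper's setting the reward would instead be computed from the key system performance indicators of the simulated cluster.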