Cell On/Off Parameter Optimization for Saving Energy via Reinforcement Learning
Minsuk Choi, Kyungrae Kim, Hongjun Jang, Hyokyung Woo, Joan S. Pujol Roig, Yue Wang, Hunje Yeon, Sunghyun Choi, Seowoo Jang
2021 IEEE Globecom Workshops (GC Wkshps), pp. 1-6, December 2021
DOI: 10.1109/GCWkshps52748.2021.9682160
Cited by: 2
Abstract
Energy cost accounts for a large portion of the expenses of operating a cellular mobile network, and it is expected to increase further as the network evolves to support advanced communication features and more base stations. In this work, we address energy saving by turning off cells while minimizing the impact on network performance. The challenge is to manage cell on/off operation optimally and safely, since the states of and demands on the network may vary over a wide range. To adapt to the circumstances a base station faces and to the requirements of its service provider, we use reinforcement learning-based approaches to train personalized or customized neural policies that operate the cell on/off algorithm. Using a replicative simulator, which reproduces the states and behaviors of real RANs (Radio Access Networks) from real data extracted from them, we show that our approach achieves maximum energy-saving gain while satisfying given performance constraints. We also propose a couple of operational modes that balance energy-saving performance against the cost of running the solution. Through training and evaluation on simple yet demonstrative scenarios, we demonstrate that our approach provides customized solutions, and we propose various operational options from which a service provider can choose.
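The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: an RL agent that learns when to switch a cell off, trading energy savings against a penalty for violating a performance constraint. Everything here is an assumption for illustration (the toy load model, the tabular Q-learning agent, the reward constants, and the overload threshold are all hypothetical); the paper itself trains neural policies against a replicative RAN simulator, not this toy environment.

```python
import random

# Toy cell on/off environment (illustrative only, not the paper's simulator).
# State: discretized traffic load level of a capacity-booster cell (0..4).
# Action: 0 = turn the cell off, 1 = keep it on.
# Reward: energy saved when off, minus a penalty if the offloaded traffic
# would degrade performance (a stand-in for the paper's QoS constraint).

LOAD_LEVELS = 5
ENERGY_SAVING_REWARD = 1.0   # hypothetical reward per "off" interval
OVERLOAD_PENALTY = 5.0       # hypothetical penalty for a constraint violation
OVERLOAD_THRESHOLD = 3       # load levels >= this cannot be offloaded safely

def step(load, action):
    """Return (reward, next_load) for one decision interval."""
    if action == 0:  # cell off
        reward = ENERGY_SAVING_REWARD
        if load >= OVERLOAD_THRESHOLD:
            reward -= OVERLOAD_PENALTY  # performance constraint violated
    else:            # cell on: no energy saving, no penalty
        reward = 0.0
    # Traffic load follows a simple bounded random walk for illustration.
    next_load = min(LOAD_LEVELS - 1, max(0, load + random.choice([-1, 0, 1])))
    return reward, next_load

# Tabular Q-learning: one row per load level, one column per action.
q = [[0.0, 0.0] for _ in range(LOAD_LEVELS)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

load = random.randrange(LOAD_LEVELS)
for _ in range(20000):
    if random.random() < epsilon:
        action = random.randrange(2)                   # explore
    else:
        action = 0 if q[load][0] >= q[load][1] else 1  # exploit
    reward, next_load = step(load, action)
    # Standard Q-learning update toward the bootstrapped target.
    q[load][action] += alpha * (reward + gamma * max(q[next_load]) - q[load][action])
    load = next_load

# The learned policy should switch the cell off at low load and keep it
# on once the overload penalty outweighs the energy saving.
for lvl, (q_off, q_on) in enumerate(q):
    print(f"load={lvl}: off={q_off:.2f} on={q_on:.2f} -> "
          f"{'off' if q_off >= q_on else 'on'}")
```

The penalty-shaped reward is one simple way to encode a performance constraint; the paper's customized policies presumably learn an analogous trade-off from the network states reproduced by its simulator.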