Cell On/Off Parameter Optimization for Saving Energy via Reinforcement Learning

Minsuk Choi, Kyungrae Kim, Hongjun Jang, Hyokyung Woo, Joan S. Pujol Roig, Yue Wang, Hunje Yeon, Sunghyun Choi, Seowoo Jang
DOI: 10.1109/GCWkshps52748.2021.9682160
Venue: 2021 IEEE Globecom Workshops (GC Wkshps), pp. 1-6
Published: 2021-12-01
Citations: 2

Abstract

Energy cost accounts for a large portion of expenses when operating a cellular mobile network, and it is expected to increase further to support advanced communication features and more base stations as the network evolves. In this work, we address energy saving by turning off cells while minimizing the impact on the performance of the network. The challenge is to optimally and safely manage cell on/off operation depending on the states of, and demands on, the network, which can vary over a wide range. To adapt to the circumstances in which a base station operates and to the requirements of its service provider, reinforcement learning-based approaches are used in this work to train personalized or customized neural policies, which operate the cell on/off algorithm. Using a replicative simulator, which can reproduce the states and behaviors of real RANs (Radio Access Networks) from real data extracted from them, we show that our approach achieves maximum energy-saving gain while satisfying given performance constraints. We also propose operational modes that balance energy-saving performance against the cost of running the solution. Through training and evaluation on simple yet demonstrative scenarios, we demonstrate that our approach provides customized solutions, and we propose various operational options that a service provider can choose from.
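To make the idea concrete, the following is a minimal toy sketch of a reinforcement-learned cell on/off policy. All of it is illustrative: the paper does not publish its state, action, or reward design, so the discretized load levels, the reward function, and tabular Q-learning here are assumptions standing in for the trained neural policies described above. The reward trades an energy credit for keeping the cell off against a penalty proportional to the traffic load that would go unserved.

```python
import random

# Toy Q-learning sketch of a cell on/off policy (illustrative only).
# State: discretized traffic load level (0 = low, 1 = medium, 2 = high).
# Action: 0 = switch the cell off, 1 = keep the cell on.
N_STATES, N_ACTIONS = 3, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(load_level, action):
    """Energy credit when off, minus a penalty for unserved demand."""
    energy_saving = 1.0 if action == 0 else 0.0
    perf_penalty = 2.0 * load_level if action == 0 else 0.0
    return energy_saving - perf_penalty

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)  # random traffic snapshot
        # Epsilon-greedy action selection.
        if rng.random() < EPS:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: q[s][x])
        r = reward(s, a)
        s2 = rng.randrange(N_STATES)  # next snapshot (independent of action)
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
    return q

q = train()
# Greedy policy per load level: off under low load, on otherwise.
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

Because the penalty outweighs the energy credit at medium and high load, the learned policy switches the cell off only when traffic is low, which mirrors the on/off trade-off the abstract describes.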