Rehenuma Tasnim Rodoshi, Taewoon Kim, Wooyeol Choi
Title: Deep Reinforcement Learning Based Dynamic Resource Allocation in Cloud Radio Access Networks
DOI: 10.1109/ICTC49870.2020.9289530
Published in: 2020 International Conference on Information and Communication Technology Convergence (ICTC)
Publication date: 2020-10-21
Citations: 4
Abstract
Cloud radio access network (C-RAN) is a promising architecture for fulfilling the ever-increasing resource demand in telecommunication networks. In C-RAN, a base station is decoupled into a baseband unit (BBU) and a remote radio head (RRH). The BBUs are centralized and virtualized as virtual machines (VMs) inside a BBU pool. This architecture can meet the massively increasing cellular data traffic demand. However, resource management in C-RAN must be designed carefully to achieve energy savings and meet user demand over a long operational period. Since user demands are highly dynamic across times and locations, optimal resource management is challenging. In this paper, we exploit a deep reinforcement learning (DRL) model to learn the spatial and temporal user demand in C-RAN, and propose an algorithm that resizes the VMs to allocate computational resources inside the BBU pool. Computational resources are allocated according to the amount of resources required by the RRHs associated with each VM. Through an extensive evaluation study, we show that the proposed algorithm makes the C-RAN resource-efficient while satisfying dynamic user demand.
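The core idea in the abstract — an RL agent that grows or shrinks a VM's compute allocation to track the demand of its associated RRHs, trading off unmet demand against wasted (energy-consuming) capacity — can be illustrated with a toy tabular Q-learning sketch. This is not the paper's method: the paper uses a deep RL model over spatio-temporal demand, while the demand levels, action set, and reward shaping below are illustrative assumptions for a single VM.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical discretization (not from the paper): each VM serves
# RRHs whose aggregate demand takes one of a few compute levels, and
# the agent may shrink, hold, or grow the VM's allocation each step.
DEMAND_LEVELS = [1, 2, 3, 4]   # discretized RRH demand
ALLOC_LEVELS = [1, 2, 3, 4]    # discretized VM size
ACTIONS = [-1, 0, 1]           # shrink / hold / grow

def step(alloc, demand, action):
    """Apply a resize action; return (new allocation, reward).

    Reward shaping (illustrative): a large penalty for failing to meet
    user demand, otherwise a small penalty per unit of over-provisioned
    (energy-wasting) capacity.
    """
    new_alloc = min(max(alloc + action, ALLOC_LEVELS[0]), ALLOC_LEVELS[-1])
    if new_alloc < demand:
        reward = -5.0                       # unmet user demand
    else:
        reward = -float(new_alloc - demand)  # wasted capacity
    return new_alloc, reward

# Tabular Q-values over (allocation, observed demand) states.
Q = {(a, d): [0.0, 0.0, 0.0] for a in ALLOC_LEVELS for d in DEMAND_LEVELS}
alpha, gamma, eps = 0.2, 0.9, 0.1  # learning rate, discount, exploration

alloc = ALLOC_LEVELS[0]
demand = random.choice(DEMAND_LEVELS)  # stand-in for dynamic traffic
for _ in range(20000):
    s = (alloc, demand)
    if random.random() < eps:                              # explore
        i = random.randrange(len(ACTIONS))
    else:                                                  # exploit
        i = max(range(len(ACTIONS)), key=lambda k: Q[s][k])
    alloc, r = step(alloc, demand, ACTIONS[i])
    demand = random.choice(DEMAND_LEVELS)                  # next demand
    s2 = (alloc, demand)
    # Standard Q-learning update toward the bootstrapped target.
    Q[s][i] += alpha * (r + gamma * max(Q[s2]) - Q[s][i])
```

After training, the greedy policy behaves as the abstract describes: it grows under-provisioned VMs to satisfy demand and shrinks over-provisioned ones to save resources. The paper's contribution is doing this at the BBU-pool scale with a learned model of spatial and temporal demand rather than a lookup table.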