Q-Learning Based Power Allocation in Self Organizing Heterogeneous Networks

Authors: J. V. Naidu, Subhajit Mukherjee, Aneek Adhya
Published in: 2021 International Conference on Industrial Electronics Research and Applications (ICIERA), 2021-12-22
DOI: 10.1109/ICIERA53202.2021.9726719
Citations: 0
Abstract
Resource allocation and interference management have been fundamental topics of interest in wireless cellular networks. Low-power small cells can be deployed by indoor users to mitigate cellular coverage problems, offload traffic from the macrocell-based cellular network, and enhance overall user throughput. To facilitate extensive use of small cells coexisting with macrocells in next-generation cellular networks, new interference management schemes are required. We consider a two-tier heterogeneous cellular network in which conventional macrocells are overlaid with femtocells. A distributed framework based on a multi-agent Markov decision process is proposed to model resource allocation in the cellular network. We explore a reinforcement learning (RL) approach, in particular a Q-learning-based self-organizing mechanism for power allocation, that enables adaptation of transmit power as new femtocells are added to the network. By mitigating co-tier and cross-tier interference simultaneously, the proposed technique maximizes the sum capacity of the femtocell network while maintaining the quality of service (QoS), expressed as a transmission-rate requirement, of all femtocell user equipment (FUEs) and the macrocell user equipment (MUE).
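The abstract's core idea, each femtocell acting as an independent Q-learning agent that selects a transmit-power level to trade capacity against interference and QoS violations, can be sketched in miniature as follows. The power levels, the toy SINR/reward model, and the hyperparameters below are illustrative assumptions for a single-state example, not values or formulas taken from the paper.

```python
import math
import random

# Illustrative sketch of one femtocell agent's Q-learning loop for discrete
# transmit-power selection. The power levels, the toy SINR/reward model, and
# the hyperparameters are demonstration assumptions, not the paper's.

POWER_LEVELS = [5.0, 10.0, 15.0, 20.0]   # candidate transmit powers (dBm)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def reward(power_dbm, interference_dbm=3.0, rate_req=2.0, interf_cost=0.35):
    """Toy reward: achieved rate, minus a cost for the interference the
    femtocell causes at higher power, minus a large penalty if the QoS
    (minimum-rate) requirement is violated."""
    sinr = 10 ** ((power_dbm - interference_dbm) / 10)   # linear SINR proxy
    rate = math.log2(1 + sinr)                           # Shannon-style rate
    qos_penalty = 10.0 if rate < rate_req else 0.0       # unmet rate requirement
    return rate - interf_cost * power_dbm - qos_penalty

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Single-state Q-table: one Q-value per candidate power level.
    q = [0.0] * len(POWER_LEVELS)
    for _ in range(episodes):
        if rng.random() < EPSILON:                       # epsilon-greedy action
            a = rng.randrange(len(POWER_LEVELS))
        else:
            a = max(range(len(POWER_LEVELS)), key=lambda i: q[i])
        r = reward(POWER_LEVELS[a])
        # Q-learning update; with a single state, max_a' Q(s', a') = max(q).
        q[a] += ALPHA * (r + GAMMA * max(q) - q[a])
    return q

q = train()
best = POWER_LEVELS[max(range(len(POWER_LEVELS)), key=lambda i: q[i])]
```

With this toy reward, the lowest power level violates the rate requirement while the highest levels pay a growing interference cost, so the agent learns an intermediate power, mirroring the capacity-versus-interference trade-off the abstract describes, reduced to a single-state, bandit-style example.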