A self-organizing resource allocation strategy based on Q-learning approach in ultra-dense networks

Ming Chen, Y. Hua, Xinyu Gu, Shiwen Nie, Zhiqiang Fan

2016 IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), September 2016. DOI: 10.1109/ICNIDC.2016.7974555
Cited by: 7
Abstract
In ultra-dense heterogeneous cellular networks, as the density of low-power base stations (BSs) increases, inter-cell interference (ICI) can become extremely strong when all BSs reuse the same time-frequency resources. In this paper, after proving that allocating orthogonal (frequency) sub-bands to adjacent cells achieves higher throughput than reusing the whole bandwidth, we propose a multi-agent Q-learning based resource allocation (QLRA) approach as an enhanced solution to maximize system performance. For QLRA, we consider two learning paradigms: the distributed Q-learning (DQL) algorithm and the centralized Q-learning (CQL) algorithm. In the DQL scenario, all small cells learn independently without sharing any information, while in the CQL scenario, interaction between different agents is taken into account and resources are scheduled in a centralized way. Simulation results show that both QLRA scenarios can learn an effective resource allocation strategy automatically and improve system throughput. Moreover, by scheduling resources centrally, the CQL scenario can improve system throughput further.
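The distributed (DQL) idea described above can be illustrated with a minimal sketch: each small cell acts as an independent agent that picks one of a few orthogonal sub-bands, and its reward is penalized when a neighboring cell reuses the same band. Everything here is an illustrative assumption, not the paper's actual formulation — the stateless (bandit-style) Q-update, the collision-based reward, the three-cell topology, and all parameter values are invented for the example.

```python
import random

K = 3                       # number of orthogonal sub-bands (assumed)
ALPHA, EPS = 0.1, 0.1       # learning rate and exploration rate (assumed)

class CellAgent:
    """One small cell learning independently (DQL-style), no information sharing."""
    def __init__(self, n_bands=K):
        # stateless Q-table: one value per candidate sub-band
        self.q = [0.0] * n_bands

    def choose(self):
        # epsilon-greedy action selection over sub-bands
        if random.random() < EPS:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # stateless Q-learning update (no next-state term in this toy version)
        self.q[action] += ALPHA * (reward - self.q[action])

def reward(action, neighbor_actions):
    # toy throughput proxy: full reward if no interfering neighbor
    # reuses the same sub-band, degraded by each collision
    collisions = sum(1 for a in neighbor_actions if a == action)
    return 1.0 / (1.0 + collisions)

# three mutually adjacent cells learning independently
random.seed(0)
agents = [CellAgent() for _ in range(3)]
for _ in range(2000):
    actions = [ag.choose() for ag in agents]
    for i, ag in enumerate(agents):
        others = actions[:i] + actions[i + 1:]
        ag.update(actions[i], reward(actions[i], others))

# greedy choices after training tend toward orthogonal sub-band assignments
print([max(range(K), key=lambda a: ag.q[a]) for ag in agents])
```

A centralized (CQL-style) variant would instead let a single controller observe all cells' choices and update a joint policy, trading the signaling overhead for coordination — which matches the abstract's claim that CQL improves throughput further.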