Improved decentralized Q-learning algorithm for interference reduction in LTE-femtocells
M. Simsek, A. Czylwik, Ana Galindo-Serrano, L. Giupponi
2011 Wireless Advanced, published 2011-06-20
DOI: 10.1109/WIAD.2011.5983301
Citations: 50
Abstract
Femtocells are receiving considerable interest in mobile communications as a strategy to overcome indoor coverage problems and to improve the efficiency of current macrocell systems. Nevertheless, the detrimental factor in such networks is co-channel interference between macrocells and femtocells, as well as among neighboring femtocells, which can dramatically decrease the overall capacity of the network. In this paper we propose a Reinforcement Learning (RL) framework based on an improved decentralized Q-learning algorithm for femtocells sharing the macrocell spectrum. Since the major drawback of Q-learning is its slow convergence, we propose a smart initialization procedure. The proposed algorithm is compared with a basic Q-learning algorithm and with power control (PC) algorithms from the literature, e.g., fixed power allocation and received-power-based PC. The goal is to show the performance improvement and enhanced convergence.
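The abstract does not give the paper's exact state, action, and reward definitions, so the sketch below is only a minimal illustration of the general idea it describes: each femtocell runs its own Q-learning agent that selects a transmit power level, and the Q-table is seeded with a non-uniform ("smart") initialization instead of all zeros to speed up convergence. All specifics here (power levels, state space, reward shape, and the initialization heuristic itself) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): a per-femtocell Q-learning
# agent choosing transmit power levels, with a heuristic non-zero Q-table
# initialization. State and reward definitions are illustrative placeholders.
import random


class FemtoQLearningAgent:
    def __init__(self, power_levels_dbm, n_states, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.powers = power_levels_dbm          # candidate transmit powers (actions)
        self.alpha, self.gamma, self.eps = alpha, gamma, epsilon
        # "Smart" initialization (assumed heuristic): bias low-power actions with
        # higher initial Q-values so early exploration favors low interference.
        self.Q = [[-p for p in power_levels_dbm] for _ in range(n_states)]

    def choose_action(self, state):
        """Epsilon-greedy selection of a power-level index for the given state."""
        if random.random() < self.eps:
            return random.randrange(len(self.powers))
        q_row = self.Q[state]
        return q_row.index(max(q_row))

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.Q[next_state])
        td_target = reward + self.gamma * best_next
        self.Q[state][action] += self.alpha * (td_target - self.Q[state][action])


# Illustrative usage: in the actual system the reward would come from SINR /
# capacity measurements trading off femtocell throughput against the
# interference caused to the macrocell; here it is a toy placeholder.
if __name__ == "__main__":
    agent = FemtoQLearningAgent(power_levels_dbm=[-20, -10, 0, 10], n_states=4)
    state = 0
    for _ in range(1000):
        a = agent.choose_action(state)
        reward = -abs(agent.powers[a]) + random.gauss(0, 1)   # toy reward signal
        next_state = random.randrange(4)                       # toy state transition
        agent.update(state, a, reward, next_state)
        state = next_state
```

Because the agents are fully decentralized, each femtocell would maintain such a table independently and learn from its own local measurements; the smart initialization only changes the starting Q-values, leaving the standard Q-learning update untouched.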