{"title":"Reinforcement Learning for Random Access in Multi-cell Networks","authors":"Dongwook Lee, Yu Zhao, Joohyung Lee","doi":"10.1109/ICAIIC51459.2021.9415281","DOIUrl":null,"url":null,"abstract":"In this paper, our goal is to maximize the system throughput in a time-slotted uplink multi-cell random access communication system. To this end, we propose a two-stage reinforcement learning (RL)-based algorithm based on the exponential-weight algorithm for exploration and exploitation (EXP3). In each macro-time slot that consists of multiple time slots, users run the RL-based algorithm to choose the associated access point (AP). Then, a transmission policy determines the sub-time slot that user will transmit data in each time slot. Another RL-based learning algorithm is used to obtain an optimal transmission policy. To show that our method is efficient, we compare our proposed algorithm with the $\\epsilon$-greedy algorithm in two different scenarios. The simulation results show that the average system throughput of our algorithm outperforms that of $\\epsilon$-greedy exploration.","PeriodicalId":432977,"journal":{"name":"2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAIIC51459.2021.9415281","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citation count: 0
Abstract
In this paper, our goal is to maximize the system throughput of a time-slotted uplink multi-cell random access communication system. To this end, we propose a two-stage reinforcement learning (RL) algorithm built on the exponential-weight algorithm for exploration and exploitation (EXP3). In each macro-time slot, which consists of multiple time slots, users run the RL-based algorithm to choose their associated access point (AP). Then, a transmission policy determines the sub-time slot in which each user transmits data within each time slot, and a second RL-based learning algorithm is used to obtain the optimal transmission policy. To demonstrate the efficiency of our method, we compare the proposed algorithm with the $\epsilon$-greedy algorithm in two different scenarios. The simulation results show that the average system throughput of our algorithm outperforms that of $\epsilon$-greedy exploration.
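To make the EXP3 component of the abstract concrete, the following is a minimal sketch of an EXP3 bandit agent as it might be used for the first stage (AP selection). It is not the authors' code: the class name, the exploration rate, the per-AP success probabilities, and the binary success/failure reward are all illustrative assumptions; only the exponential-weight update itself is standard EXP3.

```python
import numpy as np


class EXP3:
    """Minimal EXP3 bandit sketch (hypothetical, not the paper's implementation).

    Arms could represent candidate APs (stage 1) or sub-time slots (stage 2);
    here the assumed reward is 1 for a successful transmission, 0 otherwise.
    """

    def __init__(self, n_arms: int, gamma: float = 0.1):
        self.n_arms = n_arms
        self.gamma = gamma                  # exploration rate (assumed value)
        self.weights = np.ones(n_arms)      # exponential weights, one per arm

    def probabilities(self) -> np.ndarray:
        # Mix the normalized weights with uniform exploration.
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / self.n_arms

    def select(self) -> int:
        return int(np.random.choice(self.n_arms, p=self.probabilities()))

    def update(self, arm: int, reward: float) -> None:
        # Importance-weighted reward estimate keeps the update unbiased
        # even though only the chosen arm's reward is observed.
        p = self.probabilities()[arm]
        x_hat = reward / p
        self.weights[arm] *= np.exp(self.gamma * x_hat / self.n_arms)


if __name__ == "__main__":
    # Toy usage: one user choosing among 3 APs with assumed success rates.
    rng = np.random.default_rng(0)
    success_prob = [0.2, 0.5, 0.8]
    agent = EXP3(n_arms=3, gamma=0.1)
    for _ in range(1000):
        ap = agent.select()
        reward = float(rng.random() < success_prob[ap])
        agent.update(ap, reward)
    print("learned selection probabilities:", agent.probabilities().round(3))
```

In the two-stage scheme described above, one such agent would run per user at the macro-time-slot scale to pick the AP, and a second agent of the same form would learn the sub-time-slot transmission policy within each time slot.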