Dynamic Channel Access and Power Control via Deep Reinforcement Learning

Ziyang Lu, M. C. Gursoy
{"title":"Dynamic Channel Access and Power Control via Deep Reinforcement Learning","authors":"Ziyang Lu, M. C. Gursoy","doi":"10.1109/VTCFall.2019.8891391","DOIUrl":null,"url":null,"abstract":"Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements with power control.","PeriodicalId":6713,"journal":{"name":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","volume":"88 1","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VTCFall.2019.8891391","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 13

Abstract

Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning for addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate the performance considering different utilities and reward mechanisms. We provide comparisons with the optimal centralized strategies that require complete information and use weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements with power control.
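The paper's agent jointly selects a channel and a transmit power level, which amounts to learning Q-values over a flattened joint action space with a spectral-efficiency reward of the form log2(1 + SINR). The paper trains a deep Q-network; the sketch below instead uses a simple tabular, bandit-style Q-learning update on a toy single-user setting to illustrate the joint action encoding and reward. All numbers (channel count, power levels, interference profile, noise) are illustrative assumptions, not values from the paper.

```python
import random
import math

# Toy setup (assumed values, for illustration only).
N_CHANNELS = 3             # available channels
POWER_LEVELS = [0.5, 1.0]  # discrete transmit power levels
N_ACTIONS = N_CHANNELS * len(POWER_LEVELS)

def encode_action(channel, power_idx):
    """Flatten a (channel, power level) pair into one joint action index."""
    return channel * len(POWER_LEVELS) + power_idx

def decode_action(a):
    """Inverse of encode_action: recover (channel, power_idx)."""
    return divmod(a, len(POWER_LEVELS))

def reward(channel, power, interference):
    """Rate-style utility: log2(1 + SINR) on the chosen channel."""
    noise = 0.1
    sinr = power / (interference[channel] + noise)
    return math.log2(1.0 + sinr)

def train(episodes=2000, eps=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_ACTIONS                # one Q-value per joint action
    interference = [1.0, 0.2, 0.6]       # fixed toy interference per channel
    for _ in range(episodes):
        # Epsilon-greedy action selection over the joint action space.
        if rng.random() < eps:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=q.__getitem__)
        ch, p_idx = decode_action(a)
        r = reward(ch, POWER_LEVELS[p_idx], interference)
        q[a] += alpha * (r - q[a])       # bandit-style update (stateless)
    return q

q = train()
best_ch, best_p = decode_action(max(range(N_ACTIONS), key=q.__getitem__))
```

In this toy instance the agent settles on the channel with the least interference at the highest power, since the reward carries no power cost; the paper evaluates richer utilities and reward mechanisms, and replaces the table with a deep Q-network whose input is the observed network state.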