Joint AMC and Resource Allocation for Mobile Wireless Networks Based on Distributed MARL

Yin-Hwa Huang, Zhaoyang Zhang, Jue Wang, Chongwen Huang, C. Zhong
{"title":"Joint AMC and Resource Allocation for Mobile Wireless Networks Based on Distributed MARL","authors":"Yin-Hwa Huang, Zhaoyang Zhang, Jue Wang, Chongwen Huang, C. Zhong","doi":"10.1109/iccworkshops53468.2022.9814688","DOIUrl":null,"url":null,"abstract":"With the rapid development of intelligent devices, the fifth-generation (5G) mobile wireless networks are envisioned to support massive connections and higher capacity. To confront challenges on link inefficiency in traditional mobile wireless networks, the link adaptation technology is crucial for system capacity improvements and requires coordination with resource allocation strategy. In this paper, we consider a joint adaptive modulation and coding (AMC) and resource allocation (RA) in a wireless network, where multiple users share limited subcarriers and adaptively change modulation levels and transmit power with the target to maximize the long-term system throughput. Instead of using optimization theory-based methods with higher complexity, we propose an intelligent double deep Q-network (DDQN)-based AMC and RA algorithm, which regards users as agents that learn cooperatively from their past experiences and implement their policies distributively. Furthermore, to guarantee fairness among users, we re-design the multi-agent reinforcement learning (MARL) reward function to incorporate the attained proportional fairness of each user at the current cycle into our objective. Simulation results demonstrate that users successfully learn to collaborate in a distributed manner, which leads to improved throughput both of the single link level and the whole system level.","PeriodicalId":102261,"journal":{"name":"2022 IEEE International Conference on Communications Workshops (ICC Workshops)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Communications Workshops (ICC Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccworkshops53468.2022.9814688","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

With the rapid development of intelligent devices, fifth-generation (5G) mobile wireless networks are envisioned to support massive connectivity and higher capacity. To address the link inefficiency of traditional mobile wireless networks, link adaptation is crucial for improving system capacity and must be coordinated with the resource allocation strategy. In this paper, we consider joint adaptive modulation and coding (AMC) and resource allocation (RA) in a wireless network, where multiple users share a limited set of subcarriers and adaptively adjust their modulation levels and transmit power with the goal of maximizing the long-term system throughput. Instead of relying on optimization-theory-based methods of higher complexity, we propose an intelligent double deep Q-network (DDQN)-based AMC and RA algorithm, which treats users as agents that learn cooperatively from their past experiences and execute their policies in a distributed manner. Furthermore, to guarantee fairness among users, we redesign the multi-agent reinforcement learning (MARL) reward function to incorporate the proportional fairness attained by each user in the current cycle into the objective. Simulation results demonstrate that users successfully learn to collaborate in a distributed manner, which improves throughput at both the single-link level and the overall system level.
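To make the two key ingredients of the abstract concrete, the sketch below illustrates (i) a per-user reward that adds a proportional-fairness term to instantaneous throughput and (ii) the Double DQN target, in which the online network selects the next action and the target network evaluates it. This is a minimal illustration under assumed details, not the authors' implementation: the rate model, the `fairness_weight` coefficient, and the function and variable names (`proportional_fair_reward`, `double_dqn_target`, `q_online`, `q_target`) are assumptions made here for clarity.

```python
# Hedged sketch: fairness-aware MARL reward and Double DQN target computation.
# All names and constants below are illustrative assumptions, not taken from the paper.

import numpy as np


def shannon_rate(snr_linear, bandwidth_hz=180e3):
    """Achievable rate on one subcarrier (bits/s) under an AWGN approximation."""
    return bandwidth_hz * np.log2(1.0 + snr_linear)


def proportional_fair_reward(inst_rate, avg_rate, fairness_weight=1.0, eps=1e-9):
    """Reward = instantaneous throughput plus a proportional-fairness term.

    The PF term favors users whose instantaneous rate is high relative to their
    running average rate, which discourages starving weaker users.
    """
    pf_term = inst_rate / (avg_rate + eps)
    return inst_rate + fairness_weight * pf_term


def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.95, done=False):
    """Double DQN target: the online net picks the action, the target net scores it."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))
    return reward + gamma * next_q_target[best_action]


if __name__ == "__main__":
    # Example: one user's chosen (modulation level, power level) yields 12 dB SNR.
    snr = 10 ** (12.0 / 10.0)
    rate = shannon_rate(snr)          # instantaneous rate on its subcarrier
    avg = 5e5                          # exponentially averaged rate so far (bits/s)
    r = proportional_fair_reward(rate, avg)

    # Hypothetical Q-value vectors over the joint (modulation, power) action set.
    q_online = np.array([1.2, 0.7, 1.9, 0.4])
    q_target = np.array([1.0, 0.8, 1.5, 0.6])
    y = double_dqn_target(r, q_online, q_target)
    print(f"rate={rate:.1f} bit/s, reward={r:.1f}, DDQN target={y:.1f}")
```

In a distributed deployment of this kind, each user would run such an agent locally, observing its own channel state and the reward above, so no central controller is needed at execution time.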