Toward Energy-Efficient Dynamic Resource Allocation in Uplink NOMA Systems: Deep Reinforcement Learning for Single and Multi-Cell NOMA Systems

IF 7.1 | JCR Q1 (Engineering, Electrical & Electronic) | CAS Region 2 (Computer Science)
Ayman Rabee;Imad Barhumi
DOI: 10.1109/TVT.2025.3540940
Journal: IEEE Transactions on Vehicular Technology, vol. 74, no. 6, pp. 9313-9327
Published: 2025-02-12
Link: https://ieeexplore.ieee.org/document/10882994/
Citations: 0

Abstract

Non-orthogonal multiple access (NOMA) is a key technology for future wireless networks, enabling improved spectral efficiency and massive device connectivity. However, optimizing power allocation, subchannel assignment, and cell association in multi-cell uplink NOMA systems is challenging due to user mobility and the NP-hard nature of the problem. This paper addresses these challenges by formulating the problem as a mixed-integer non-linear programming (MINLP) model to maximize energy efficiency (EE). We propose a deep reinforcement learning framework that employs deep Q-networks (DQN) for cell association and subchannel assignment, and twin delayed deep deterministic policy gradient (TD3) for power allocation. Simulation results reveal significant EE improvements, with multi-agent TD3 (MATD3) outperforming traditional Lagrange methods and multi-agent deep deterministic policy gradient (MADDPG). Furthermore, the proposed method exhibits robust adaptability to user mobility and superior performance in multi-cell environments, effectively mitigating inter-cell interference and enhancing resource allocation in dynamic scenarios.
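The objective the abstract describes — energy efficiency of an uplink NOMA subchannel under successive interference cancellation (SIC) — can be sketched as follows. This is an illustrative reconstruction, not the paper's model: the function name, the circuit-power term, and all numeric defaults are assumptions.

```python
import math

def uplink_noma_ee(gains, powers, noise=1e-9, p_circuit=0.1, bandwidth=1.0):
    """Energy efficiency (bits/s per watt) of one uplink NOMA subchannel.

    The base station applies SIC: users are decoded in descending order
    of received power, and each decoded signal is subtracted before the
    next user is decoded.  `gains` and `powers` are per-user channel
    gains and transmit powers; all values here are illustrative only.
    """
    # Decode strongest received signal first (typical uplink SIC order).
    order = sorted(range(len(gains)),
                   key=lambda i: gains[i] * powers[i], reverse=True)
    remaining = sum(gains[i] * powers[i] for i in order)
    sum_rate = 0.0
    for i in order:
        signal = gains[i] * powers[i]
        # This user's signal is decoded and removed from the interference.
        remaining = max(remaining - signal, 0.0)
        sinr = signal / (remaining + noise)   # undecoded users interfere
        sum_rate += bandwidth * math.log2(1.0 + sinr)
    # EE = achievable sum rate over total consumed power (tx + circuit).
    return sum_rate / (sum(powers) + p_circuit)
```

In the paper's framework this quantity would serve as the reward signal: the TD3 agents pick the continuous `powers`, while the DQN picks the discrete subchannel/cell assignment that determines which users share a subchannel.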
Source journal: IEEE Transactions on Vehicular Technology
CiteScore: 6.00
Self-citation rate: 8.80%
Annual publications: 1245
Review time: 6.3 months
Journal description: The scope of the Transactions is threefold (approved by the IEEE Periodicals Committee in 1967) and is published on the journal website as follows:

Communications: The use of mobile radio on land, sea, and air, including cellular radio, two-way radio, and one-way radio, with applications to dispatch and control vehicles, mobile radiotelephone, radio paging, and status monitoring and reporting. Related areas include spectrum usage, component radio equipment such as cavities and antennas, computer control for radio systems, digital modulation and transmission techniques, mobile radio circuit design, radio propagation for vehicular communications, effects of ignition noise and radio frequency interference, and consideration of the vehicle as part of the radio operating environment.

Transportation Systems: The use of electronic technology for the control of ground transportation systems including, but not limited to, traffic aid systems; traffic control systems; automatic vehicle identification, location, and monitoring systems; automated transport systems, with single and multiple vehicle control; and moving walkways or people-movers.

Vehicular Electronics: The use of electronic or electrical components and systems for control, propulsion, or auxiliary functions, including but not limited to: electronic controls for engine, drive train, convenience, safety, and other vehicle systems; sensors, actuators, and microprocessors for onboard use; electronic fuel control systems; vehicle electrical components and systems; collision avoidance systems; electromagnetic compatibility in the vehicle environment; and electric vehicles and controls.