{"title":"面向上行NOMA系统的节能动态资源分配:单单元和多单元NOMA系统的深度强化学习","authors":"Ayman Rabee;Imad Barhumi","doi":"10.1109/TVT.2025.3540940","DOIUrl":null,"url":null,"abstract":"Non-orthogonal multiple access (NOMA) is a key technology for future wireless networks, enabling improved spectral efficiency and massive device connectivity. However, optimizing power allocation, subchannel assignment, and cell association in multi-cell uplink NOMA systems is challenging due to user mobility and the NP-hard nature of the problem. This paper addresses these challenges by formulating the problem as a mixed-integer non-linear programming (MINLP) model to maximize energy efficiency (EE). We propose a deep reinforcement learning framework that employs deep Q-networks (DQN) for cell association and subchannel assignment, and twin delayed deep deterministic policy gradient (TD3) for power allocation. Simulation results reveal significant EE improvements, with multi-agent TD3 (MATD3) outperforming traditional Lagrange methods and multi-agent deep deterministic policy gradient (MADDPG). Furthermore, the proposed method exhibits robust adaptability to user mobility and superior performance in multi-cell environments, effectively mitigating inter-cell interference and enhancing resource allocation in dynamic scenarios.","PeriodicalId":13421,"journal":{"name":"IEEE Transactions on Vehicular Technology","volume":"74 6","pages":"9313-9327"},"PeriodicalIF":7.1000,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Toward Energy-Efficient Dynamic Resource Allocation in Uplink NOMA Systems: Deep Reinforcement Learning for Single and Multi-Cell NOMA Systems\",\"authors\":\"Ayman Rabee;Imad Barhumi\",\"doi\":\"10.1109/TVT.2025.3540940\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Non-orthogonal multiple access (NOMA) is a key technology for future wireless networks, enabling improved spectral efficiency and massive device connectivity. However, optimizing power allocation, subchannel assignment, and cell association in multi-cell uplink NOMA systems is challenging due to user mobility and the NP-hard nature of the problem. This paper addresses these challenges by formulating the problem as a mixed-integer non-linear programming (MINLP) model to maximize energy efficiency (EE). We propose a deep reinforcement learning framework that employs deep Q-networks (DQN) for cell association and subchannel assignment, and twin delayed deep deterministic policy gradient (TD3) for power allocation. Simulation results reveal significant EE improvements, with multi-agent TD3 (MATD3) outperforming traditional Lagrange methods and multi-agent deep deterministic policy gradient (MADDPG). 
Furthermore, the proposed method exhibits robust adaptability to user mobility and superior performance in multi-cell environments, effectively mitigating inter-cell interference and enhancing resource allocation in dynamic scenarios.\",\"PeriodicalId\":13421,\"journal\":{\"name\":\"IEEE Transactions on Vehicular Technology\",\"volume\":\"74 6\",\"pages\":\"9313-9327\"},\"PeriodicalIF\":7.1000,\"publicationDate\":\"2025-02-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Vehicular Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10882994/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Vehicular Technology","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10882994/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Toward Energy-Efficient Dynamic Resource Allocation in Uplink NOMA Systems: Deep Reinforcement Learning for Single and Multi-Cell NOMA Systems
Non-orthogonal multiple access (NOMA) is a key technology for future wireless networks, enabling improved spectral efficiency and massive device connectivity. However, optimizing power allocation, subchannel assignment, and cell association in multi-cell uplink NOMA systems is challenging due to user mobility and the NP-hard nature of the problem. This paper addresses these challenges by formulating the problem as a mixed-integer non-linear programming (MINLP) model to maximize energy efficiency (EE). We propose a deep reinforcement learning framework that employs deep Q-networks (DQN) for cell association and subchannel assignment, and twin delayed deep deterministic policy gradient (TD3) for power allocation. Simulation results reveal significant EE improvements, with multi-agent TD3 (MATD3) outperforming traditional Lagrange methods and multi-agent deep deterministic policy gradient (MADDPG). Furthermore, the proposed method exhibits robust adaptability to user mobility and superior performance in multi-cell environments, effectively mitigating inter-cell interference and enhancing resource allocation in dynamic scenarios.
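To make the energy-efficiency (EE) objective behind this formulation concrete, the snippet below gives a minimal numerical sketch of uplink NOMA EE on a single subchannel, assuming successive interference cancellation (SIC) at the base station in descending channel-gain order and a fixed circuit-power term. The function and parameter names (uplink_noma_ee, channel_gains, circuit_power, and the example numbers) are illustrative assumptions, not the paper's exact system model or values.

```python
import numpy as np

def uplink_noma_ee(powers, channel_gains, noise_power, circuit_power, bandwidth=1.0):
    """Energy efficiency (rate per unit power) of one uplink NOMA subchannel.

    Assumes the base station applies SIC, decoding users in descending order
    of channel gain, so each user's signal sees interference only from the
    users decoded after it.
    """
    powers = np.asarray(powers, dtype=float)
    gains = np.asarray(channel_gains, dtype=float)

    # Decode stronger users first; weaker users remain as interference.
    order = np.argsort(gains)[::-1]
    rx_power = powers[order] * gains[order]          # received powers in decoding order

    sum_rate = 0.0
    for k in range(len(order)):
        interference = rx_power[k + 1:].sum()        # users not yet decoded
        sinr = rx_power[k] / (interference + noise_power)
        sum_rate += bandwidth * np.log2(1.0 + sinr)  # Shannon rate of user k

    total_power = powers.sum() + circuit_power       # transmit plus circuit consumption
    return sum_rate / total_power

# Example: two users sharing one subchannel (illustrative numbers only).
ee = uplink_noma_ee(powers=[0.2, 0.1],               # transmit powers in Watts
                    channel_gains=[1e-6, 4e-7],      # |h|^2 including path loss
                    noise_power=1e-9,                # Watts
                    circuit_power=0.05)              # Watts
print(f"Energy efficiency: {ee:.2f} bit/s/Hz per Watt")
```

In the paper's framework, a quantity of this form would serve as the reward signal that the TD3 power-allocation agents maximize, while the DQN handles the discrete cell-association and subchannel-assignment decisions.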
Journal description:
The scope of the Transactions is threefold (approved by the IEEE Periodicals Committee in 1967) and is published on the journal website as follows:

Communications: The use of mobile radio on land, sea, and air, including cellular radio, two-way radio, and one-way radio, with applications to dispatch and control vehicles, mobile radiotelephone, radio paging, and status monitoring and reporting. Related areas include spectrum usage, component radio equipment such as cavities and antennas, computer control for radio systems, digital modulation and transmission techniques, mobile radio circuit design, radio propagation for vehicular communications, effects of ignition noise and radio frequency interference, and consideration of the vehicle as part of the radio operating environment.

Transportation Systems: The use of electronic technology for the control of ground transportation systems including, but not limited to, traffic aid systems; traffic control systems; automatic vehicle identification, location, and monitoring systems; automated transport systems, with single and multiple vehicle control; and moving walkways or people-movers.

Vehicular Electronics: The use of electronic or electrical components and systems for control, propulsion, or auxiliary functions, including, but not limited to, electronic controls for engine, drive train, convenience, safety, and other vehicle systems; sensors, actuators, and microprocessors for onboard use; electronic fuel control systems; vehicle electrical components and systems; collision avoidance systems; electromagnetic compatibility in the vehicle environment; and electric vehicles and controls.