Multi Agent Deep Reinforcement learning with Deep Q-Network based energy efficiency and resource allocation in NOMA wireless Systems

K. R. Chandra, Somasekhar Borugadda
DOI: 10.1109/ICEEICT56924.2023.10157052
Published in: 2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)
Publication date: 2023-04-05
Citations: 0

Abstract

In recent years, demand for higher-capacity wireless cellular networks has grown. Operating costs have risen because operators consume more energy building new cell sites or boosting transmission power at existing sites to meet demand. Since energy costs are so high, lowering them must be a top goal. Non-orthogonal multiple access (NOMA), with its improved efficiency, has become a practical multiple-access technique in wireless network construction. To improve energy efficiency and reduce power consumption, this paper proposes a Deep Q-Network policy with a novel power allocation method for NOMA-enabled network devices. A Multi-Agent Deep Reinforcement Learning (MADRL) with Deep Q-Network (DQN) model is presented for simultaneous wireless information and power transfer in NOMA-enabled devices. Using the power-splitting (PS) approach, we investigate how to simultaneously increase the total transmission rate and harvest energy while meeting each NOMA system's minimum transmission-rate and harvested-energy requirements. To form the objective function, we combine the transmission rates from information decoding with the transformed throughput from energy harvesting. We also examine wireless network delays and dynamic energy-efficient resource allocation, formulating the resource allocation problem (i.e., time allocation and power control) as a dynamic stochastic optimization model that maximizes system energy efficiency (EE) while satisfying a given quality of service (QoS) in terms of delay. While ensuring throughput and fairness, MADRL-DQN enables the system to maximize energy efficiency; when signal traffic transmission dominates energy utilization, DQN saves energy by reducing the resources assigned to a user. Simulation results demonstrate the effectiveness of the proposed MADRL-DQN resource allocation algorithm compared with existing methods.
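The abstract describes an agent that learns a power allocation and power-splitting (PS) ratio to maximize energy efficiency. The sketch below illustrates the general idea with a simplified tabular Q-learning stand-in for the paper's deep Q-network: all constants, the quantized channel model, and the reward definition are illustrative assumptions, not values or equations taken from the paper.

```python
import numpy as np

# Hypothetical sketch: a Q-learning agent choosing a transmit power and a
# power-splitting ratio for one NOMA user, rewarded by energy efficiency.
# The paper uses a deep Q-network over a richer state; this tabular version
# only illustrates the action space and reward structure.

rng = np.random.default_rng(0)

POWER_LEVELS = np.array([0.2, 0.5, 1.0])   # candidate transmit powers p (W), assumed
PS_RATIOS    = np.array([0.3, 0.6, 0.9])   # rho: fraction of power to info decoding
N_STATES     = 4                           # quantized channel-gain states, assumed
N_ACTIONS    = len(POWER_LEVELS) * len(PS_RATIOS)
NOISE, ETA, P_CIRCUIT = 1e-3, 0.7, 0.1     # noise power, EH efficiency, circuit power

def reward(state, action):
    """Energy efficiency of one slot under a standard PS-SWIPT model:
    rate R = log2(1 + rho*p*|h|^2 / sigma^2), harvested E = eta*(1-rho)*p*|h|^2."""
    p   = POWER_LEVELS[action // len(PS_RATIOS)]
    rho = PS_RATIOS[action % len(PS_RATIOS)]
    h2  = 0.05 * (state + 1)               # channel gain grows with state index (toy model)
    rate      = np.log2(1.0 + rho * p * h2 / NOISE)
    harvested = ETA * (1.0 - rho) * p * h2
    return rate / (p + P_CIRCUIT - harvested)   # EE: rate per net Watt consumed

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration

state = int(rng.integers(N_STATES))
for _ in range(5000):
    # epsilon-greedy action selection over (power, PS-ratio) pairs
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = int(rng.integers(N_STATES))    # i.i.d. block-fading stand-in
    # standard Q-learning temporal-difference update
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

best = int(Q.mean(axis=0).argmax())
print("greedy power:", POWER_LEVELS[best // len(PS_RATIOS)],
      "PS ratio:", PS_RATIOS[best % len(PS_RATIOS)])
```

In the paper's multi-agent setting, each user would run such a learner (with a neural network replacing the table), and the reward would additionally account for inter-user interference and fairness.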