{"title":"基于深度q网络的多智能体深度强化学习在NOMA无线系统中的能量效率和资源分配","authors":"K. R. Chandra, Somasekhar Borugadda","doi":"10.1109/ICEEICT56924.2023.10157052","DOIUrl":null,"url":null,"abstract":"In recent years, there has been an increase in demand for wireless cellular networks to have higher capacity. Operating costs have increased because operators use more energy to build new cell sites or boost the transmission power at existing locations to satisfy demand. Since energy costs are so high, lowering them must be a top goal. Non-orthogonal multiple access (NOMA), which has increased efficiency, has become a practical multiple access technique in wireless network construction. To improve energy efficiency and reduce power consumption, this paper proposes a Deep Q-Network policy with a novel power allocation method for NOMA-enabled network devices. The Multi-Agent Deep Reinforcement Learning (MADRL) with Deep Q-Network (DQN) model is presented for simultaneous wireless information and power transfer in NOMA-enabled devices. We investigate ways to increase the total transmission rate simultaneously and collect energy while meeting each NOMA system's minimum transmission rate and harvested energy requirements using the power splitting (PS) approach. To create an objective function, combine the transmission rates from information decoding with the transformed throughput from energy harvesting. We investigate wireless network development delays and dynamic energy-efficient resource allocation. We develop the resource allocation (i.e., time allocation and power control) problem as a dynamic stochastic optimization model that maximizes system energy efficiency (EE) while simultaneously satisfying a certain quality of service (QoS) in terms of delay. While ensuring throughput and fairness, MADRL-DQN enables the system to maximize energy efficiency; DQN allows energy savings by reducing the number of resources assigned to a user when signal traffic transmission dominates energy utilization. Compared to the methods already in use, the simulation results demonstrated the effectiveness of the proposed MADRL-DQN resource allocation algorithm.","PeriodicalId":345324,"journal":{"name":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi Agent Deep Reinforcement learning with Deep Q-Network based energy efficiency and resource allocation in NOMA wireless Systems\",\"authors\":\"K. R. Chandra, Somasekhar Borugadda\",\"doi\":\"10.1109/ICEEICT56924.2023.10157052\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, there has been an increase in demand for wireless cellular networks to have higher capacity. Operating costs have increased because operators use more energy to build new cell sites or boost the transmission power at existing locations to satisfy demand. Since energy costs are so high, lowering them must be a top goal. Non-orthogonal multiple access (NOMA), which has increased efficiency, has become a practical multiple access technique in wireless network construction. To improve energy efficiency and reduce power consumption, this paper proposes a Deep Q-Network policy with a novel power allocation method for NOMA-enabled network devices. 
The Multi-Agent Deep Reinforcement Learning (MADRL) with Deep Q-Network (DQN) model is presented for simultaneous wireless information and power transfer in NOMA-enabled devices. We investigate ways to increase the total transmission rate simultaneously and collect energy while meeting each NOMA system's minimum transmission rate and harvested energy requirements using the power splitting (PS) approach. To create an objective function, combine the transmission rates from information decoding with the transformed throughput from energy harvesting. We investigate wireless network development delays and dynamic energy-efficient resource allocation. We develop the resource allocation (i.e., time allocation and power control) problem as a dynamic stochastic optimization model that maximizes system energy efficiency (EE) while simultaneously satisfying a certain quality of service (QoS) in terms of delay. While ensuring throughput and fairness, MADRL-DQN enables the system to maximize energy efficiency; DQN allows energy savings by reducing the number of resources assigned to a user when signal traffic transmission dominates energy utilization. Compared to the methods already in use, the simulation results demonstrated the effectiveness of the proposed MADRL-DQN resource allocation algorithm.\",\"PeriodicalId\":345324,\"journal\":{\"name\":\"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICEEICT56924.2023.10157052\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Second International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEEICT56924.2023.10157052","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi Agent Deep Reinforcement learning with Deep Q-Network based energy efficiency and resource allocation in NOMA wireless Systems
Abstract: In recent years, demand for higher-capacity wireless cellular networks has grown. Operating costs have risen because operators consume more energy to build new cell sites or to boost transmission power at existing sites to meet this demand. With energy costs so high, reducing them must be a top priority. Non-orthogonal multiple access (NOMA), with its improved spectral efficiency, has become a practical multiple-access technique for wireless network deployment. To improve energy efficiency and reduce power consumption, this paper proposes a Deep Q-Network policy with a novel power allocation method for NOMA-enabled network devices. A Multi-Agent Deep Reinforcement Learning (MADRL) with Deep Q-Network (DQN) model is presented for simultaneous wireless information and power transfer (SWIPT) in NOMA-enabled devices. Using the power-splitting (PS) approach, we investigate how to maximize the total transmission rate and the harvested energy simultaneously while meeting each NOMA user's minimum transmission-rate and harvested-energy requirements. The objective function combines the transmission rates from information decoding with the transformed throughput from energy harvesting. We also study delay and dynamic energy-efficient resource allocation in the evolving wireless network, formulating the resource allocation problem (i.e., time allocation and power control) as a dynamic stochastic optimization model that maximizes system energy efficiency (EE) while satisfying a given quality of service (QoS) in terms of delay. While ensuring throughput and fairness, MADRL-DQN enables the system to maximize energy efficiency; the DQN saves energy by reducing the resources assigned to a user when signal traffic transmission dominates energy utilization. Simulation results demonstrate that the proposed MADRL-DQN resource allocation algorithm outperforms existing methods.
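The abstract does not state the system model explicitly. For concreteness, the following is a standard power-splitting SWIPT formulation consistent with its description: each user splits its received signal, sending a fraction ρ_k to information decoding and 1−ρ_k to energy harvesting, and the EE objective divides the sum rate by total consumed power. All symbols (B, η, P_c, σ², the SIC interference set I_k, the minimum-rate and minimum-energy thresholds) are introduced here for illustration and are not taken from the paper.

```latex
% Illustrative PS-SWIPT model and EE objective (not the paper's exact notation).
% \rho_k: power-splitting ratio of user k; \mathcal{I}_k: users whose signals
% remain as interference for user k after successive interference cancellation.
\begin{align}
  R_k &= B \log_2\!\left(1 + \frac{\rho_k\, p_k |h_k|^2}
        {\rho_k \sum_{j \in \mathcal{I}_k} p_j |h_k|^2 + \sigma^2}\right)
        && \text{(information-decoding rate)} \\
  E_k &= \eta\,(1-\rho_k)\,|h_k|^2 \sum_{j} p_j
        && \text{(harvested energy)} \\
  \max_{\{p_k,\,\rho_k\}} \;& \frac{\sum_k R_k}{\sum_k p_k + P_c}
  \quad \text{s.t.} \quad R_k \ge R_k^{\min},\;\; E_k \ge E_k^{\min},\;\;
  0 \le \rho_k \le 1,\;\; \textstyle\sum_k p_k \le P_{\max}
\end{align}
```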
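The abstract likewise does not specify the network architecture, action space, or reward. Below is a minimal single-agent sketch in PyTorch of the kind of DQN the MADRL scheme would run per NOMA user, assuming a discretized joint (transmit power, PS ratio) action space and a reward built from instantaneous EE with penalties for QoS violations. All class names and hyperparameters are hypothetical.

```python
# Hypothetical per-user DQN agent sketch; architecture, action discretization,
# and hyperparameters are illustrative, not taken from the paper.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a local channel-state observation to Q-values over joint
    (power level, power-splitting ratio) actions."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNAgent:
    def __init__(self, obs_dim, n_actions, gamma=0.99, lr=1e-3, eps=0.1):
        self.q = QNetwork(obs_dim, n_actions)
        self.target = QNetwork(obs_dim, n_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=10_000)  # experience replay
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, obs):
        # Epsilon-greedy choice over discretized (power, PS-ratio) pairs.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(obs, dtype=torch.float32))
            return int(q.argmax())

    def store(self, obs, action, reward, next_obs, done):
        # reward would be, e.g., instantaneous EE minus penalties for
        # violating the rate, harvested-energy, or delay constraints.
        self.buffer.append((obs, action, reward, next_obs, done))

    def learn(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        to_t = lambda x: torch.as_tensor(np.array(x), dtype=torch.float32)
        obs_b, act_b, rew_b, nxt_b, done_b = (to_t(x) for x in zip(*batch))
        q_sa = self.q(obs_b).gather(1, act_b.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():  # standard DQN target with a frozen network
            target = rew_b + self.gamma * self.target(nxt_b).max(1).values * (1 - done_b)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        self.target.load_state_dict(self.q.state_dict())
```

In a multi-agent setup, each NOMA user would run one such agent against its own local observation; whether training is centralized (shared replay, synchronized target updates) or fully decentralized is not specified in the abstract.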