Authors: Nada Abdel Khalek; Nadia Abdolkhani; Walaa Hamouda
DOI: 10.1109/JIOT.2024.3416371
Journal: IEEE Internet of Things Journal, vol. 11, no. 19, pp. 30833-30846 (Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS; Impact Factor 8.9)
Published: 2024-07-18 (Journal Article)
Source: https://ieeexplore.ieee.org/document/10601690/
Deep Reinforcement Learning for Joint Power Control and Access Coordination in Energy Harvesting CIoT
The Internet of Things (IoT) has attracted considerable interest owing to its wide range of applications. Cognitive IoT (CIoT) networks utilize cognitive radio (CR) technology to relieve spectrum congestion and boost network performance. In this context, this article proposes a novel deep reinforcement learning (DRL) approach for joint power control and channel access coordination, tailored to energy-constrained CIoT networks. Unlike existing works, our approach considers the coordination dynamics between competing devices and adopts a realistic energy harvesting (EH) model. The goal of the CIoT transmitter is to meet the interference constraint imposed by the primary network and coordinate channel access with the other CIoT devices while optimizing its lifetime and performance. We model the joint power control and access coordination problem as a model-free Markov decision process (MDP) and introduce a novel deep Q-network (DQN) architecture. This architecture enables a CIoT transmitter to autonomously make decisions regarding EH and data transmission, while also regulating transmit power to maximize the network's performance and lifetime. These decisions incorporate critical factors such as channel occupancy by other devices, EH opportunities, and interference constraints, without requiring prior knowledge. Through extensive simulations, we demonstrate that the proposed DQN strategy converges faster than the benchmarks, facilitating adaptive, energy-efficient, and realistic spectrum sharing in CIoT networks. Additionally, our algorithm consistently outperforms the benchmarks in terms of average sum rate, interference ratio, and reward.
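The kind of MDP described in the abstract can be illustrated with a toy sketch. The paper proposes a DQN; for a self-contained, dependency-free illustration we substitute tabular Q-learning over a deliberately tiny state space (battery level, channel occupancy). All specifics here (action set, battery capacity, energy costs, reward values, channel-occupancy probability) are invented for illustration and are not the paper's model.

```python
import random

# Toy environment (assumed, not the paper's model): each slot the CIoT
# transmitter either harvests energy or transmits at one of two power levels.
ACTIONS = ["harvest", "tx_low", "tx_high"]
ENERGY_COST = {"harvest": 0, "tx_low": 1, "tx_high": 2}
RATE = {"harvest": 0.0, "tx_low": 1.0, "tx_high": 2.0}
MAX_BATTERY = 5

def step(battery, channel_busy, action, rng):
    """One slot: returns (next_battery, next_channel_busy, reward)."""
    if action == "harvest":
        battery = min(MAX_BATTERY, battery + 1)
        reward = 0.0
    elif battery < ENERGY_COST[action]:
        reward = -1.0  # attempted transmission without enough energy
    else:
        battery -= ENERGY_COST[action]
        # Transmitting on an occupied channel breaks access coordination
        # (stand-in for the paper's interference constraint):
        reward = -2.0 if channel_busy else RATE[action]
    next_busy = rng.random() < 0.4  # i.i.d. channel occupancy (assumption)
    return battery, next_busy, reward

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular, model-free Q-learning on the toy MDP (epsilon-greedy)."""
    rng = random.Random(seed)
    q = {}  # state (battery, busy) -> list of per-action values
    battery, busy = MAX_BATTERY, False
    for _ in range(episodes):
        s = (battery, busy)
        qs = q.setdefault(s, [0.0] * len(ACTIONS))
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))      # explore
        else:
            a = qs.index(max(qs))                # exploit
        battery, busy, r = step(battery, busy, ACTIONS[a], rng)
        qs2 = q.setdefault((battery, busy), [0.0] * len(ACTIONS))
        qs[a] += alpha * (r + gamma * max(qs2) - qs[a])
    return q
```

A DQN replaces the `q` table with a neural network approximating the same action values, which is what makes the approach scale to the richer state (channel occupancy by other devices, EH opportunities, interference constraints) considered in the paper.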
Journal Introduction:
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs), such as IEEE, IETF, ITU, 3GPP, and ETSI.