K. Zong, "Deep Reinforcement Learning-Based Dynamic Multi-Channel Access for Heterogeneous Wireless Networks with DenseNet," 2021 IEEE/CIC International Conference on Communications in China (ICCC Workshops), published 2021-07-28. DOI: 10.1109/ICCCWorkshops52231.2021.9538886.
Deep Reinforcement Learning-Based Dynamic Multi-Channel Access for Heterogeneous Wireless Networks with DenseNet
In this paper, we consider the problem of dynamic multi-channel access in heterogeneous wireless networks, where multiple independent channels are shared by multiple nodes of different types. The objective is to find a strategy that maximizes the expected long-term probability of successful transmission. The dynamic multi-channel access problem can be formulated as a partially observable Markov decision process (POMDP). To deal with this problem, we apply a deep reinforcement learning (DRL) approach that provides a model-free access method, in which the nodes have no prior knowledge of the wireless network and cannot exchange messages with other nodes. Specifically, we take advantage of a double deep Q-network (DDQN) with DenseNet to learn the wireless network environment and to select the optimal channel at the beginning of each time slot. We investigate the proposed DDQN approach in different environments, covering both fixed-pattern and time-varying scenarios. The experimental results show that the proposed DDQN with DenseNet can efficiently learn the channel-switching pattern and choose a near-optimal action to avoid collisions in every slot. Moreover, the proposed DDQN approach also achieves satisfactory performance when adapting to time-varying scenarios.
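The core idea of the abstract — an agent that learns a fixed channel-switching pattern and picks the idle channel each slot using double Q-learning — can be illustrated with a minimal sketch. This is not the paper's method: the paper uses a DDQN with a DenseNet function approximator, whereas the sketch below uses a tabular double Q-learning agent on a hypothetical round-robin environment (one idle channel per slot, rotating each slot), purely to show the update rule in which one table selects the greedy action and the other evaluates it.

```python
import random

random.seed(0)

N_CHANNELS = 4

# Hypothetical fixed-pattern environment: the single idle (collision-free)
# channel rotates round-robin each time slot.
def idle_channel(t):
    return t % N_CHANNELS

# Double Q-learning keeps two tables; here the state is the channel
# observed to be idle in the previous slot.
QA = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]
QB = [[0.0] * N_CHANNELS for _ in range(N_CHANNELS)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def act(s):
    # Epsilon-greedy over the sum of both tables.
    if random.random() < eps:
        return random.randrange(N_CHANNELS)
    q = [QA[s][a] + QB[s][a] for a in range(N_CHANNELS)]
    return q.index(max(q))

s = idle_channel(0)
for t in range(1, 20001):
    a = act(s)
    r = 1.0 if a == idle_channel(t) else 0.0  # success = picked the idle channel
    s2 = idle_channel(t)
    # Double update: select the greedy action with one table,
    # evaluate it with the other (reduces overestimation bias).
    if random.random() < 0.5:
        best = QA[s2].index(max(QA[s2]))
        QA[s][a] += alpha * (r + gamma * QB[s2][best] - QA[s][a])
    else:
        best = QB[s2].index(max(QB[s2]))
        QB[s][a] += alpha * (r + gamma * QA[s2][best] - QB[s][a])
    s = s2

# Greedy policy after training: from idle channel s, the next idle
# channel is (s + 1) mod N_CHANNELS.
policy = [max(range(N_CHANNELS), key=lambda a: QA[st][a] + QB[st][a])
          for st in range(N_CHANNELS)]
print(policy)
```

In the paper's setting the Q-tables are replaced by a DenseNet whose input is the (partial) observation history, which is what makes the approach scale to time-varying patterns; the decoupled select/evaluate step above is the same in both cases.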