Title: A deep reinforcement learning-based D2D spectrum allocation underlaying a cellular network
Authors: Yao-Jen Liang, Yu-Chan Tseng, Chi-Wen Hsieh
Journal: Wireless Networks (Q3, Computer Science, Information Systems; Impact Factor 2.1)
Publication date: 2024-05-30
DOI: 10.1007/s11276-024-03766-6 (https://doi.org/10.1007/s11276-024-03766-6)
Citations: 0
Abstract
We develop a deep reinforcement learning (DRL)-based spectrum access scheme for device-to-device (D2D) communications in an underlay cellular network. Under this scheme, the base station (BS) aims to maximize the overall system throughput of both the D2D and cellular communications by learning an optimal spectrum allocation strategy, while D2D pairs dynamically access the time slots (TSs) of a shared spectrum belonging to a dedicated cellular user (CU). In particular, to ensure that the quality-of-service (QoS) requirements of cell-edge CUs are met, this paper accounts for the various positions of CUs and D2D pairs by dividing the cellular area into shareable and un-shareable areas. A double deep Q-network (DDQN) is then adopted at the BS to decide whether, and which, D2D pair can access each TS within a shared spectrum. The proposed DDQN spectrum allocation not only enjoys low computational complexity, since only current state information is used as input, but also approaches the throughput of the exhaustive-search method, since received signal-to-noise ratios are used as inputs. Numerical results show that the proposed deep learning-based spectrum access scheme outperforms state-of-the-art algorithms in terms of throughput.
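The abstract does not give the authors' network details, but the double deep Q-network it relies on has a standard target computation: the online network selects the best next action, and a separate target network evaluates it, which reduces the overestimation bias of plain DQN. A minimal illustrative sketch (the function name, toy Q-values, and the "which D2D pair gets this TS" action reading are assumptions, not the paper's code):

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.9, done=False):
    """Double-DQN bootstrap target:
    the online net selects argmax_a Q_online(s', a),
    the target net evaluates that action with Q_target(s', a*)."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))       # selection: online network
    return reward + gamma * next_q_target[best_action]  # evaluation: target network

# Toy example: 3 candidate actions, e.g. "which D2D pair (or none) gets this TS".
q_online = np.array([1.0, 3.0, 2.0])   # online net prefers action 1
q_target = np.array([0.5, 2.0, 4.0])   # target net values action 1 at 2.0
y = ddqn_target(reward=1.0, next_q_online=q_online, next_q_target=q_target, gamma=0.9)
print(y)  # 1.0 + 0.9 * 2.0 = 2.8
```

Note that a plain max over `q_target` would have bootstrapped from 4.0 instead; decoupling selection from evaluation is exactly what distinguishes DDQN from DQN.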
Journal Introduction:
The wireless communication revolution is bringing fundamental changes to data networking and telecommunication, and is making integrated networks a reality. By freeing the user from the cord, personal communication networks, wireless LANs, mobile radio networks, and cellular systems hold the promise of fully distributed mobile computing and communications, anytime, anywhere.
Focusing on the networking and user aspects of the field, Wireless Networks provides a global forum for archival-value contributions documenting these fast-growing areas of interest. The journal publishes refereed articles dealing with research, experience, and management issues of wireless networks. Its aim is to allow the reader to benefit from the experience, problems, and solutions described.