{"title":"STAR-RIS Assisted Secrecy Communication With Deep Reinforcement Learning","authors":"Miao Zhang;Xuran Ding;Yanqun Tang;Shixun Wu;Kai Xu","doi":"10.1109/TGCN.2024.3466189","DOIUrl":null,"url":null,"abstract":"In this paper, we investigate secure transmission in a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-assisted downlink multiple-input single-output (MISO) wireless network. The secrecy rate is maximized by jointly designing the transmit beamforming and the transmission and reflection coefficients of the STAR-RIS, while satisfying the electromagnetic-property constraint of the STAR-RIS and the transmit power limit of the base station. Since this communication network operates in a dynamic environment, the optimization problem is non-convex and mathematically difficult to solve. To address this issue, two deep reinforcement learning (DRL)-based algorithms, namely the soft actor-critic (SAC) algorithm and soft actor-critic based on loss-adjusted approximate actor prioritized experience replay (L3APER-SAC), are proposed to obtain the maximum reward by constantly interacting with and learning from the dynamic environment. Moreover, to achieve higher performance and stability, the L3APER-SAC algorithm employs two experience replay buffers: a regular experience replay buffer and a prioritized experience replay buffer. Simulation results comprehensively assess the performance of the two DRL algorithms and indicate that both proposed algorithms outperform benchmark approaches. In particular, L3APER-SAC exhibits superior performance, albeit with an associated increase in computational complexity.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 2","pages":"739-753"},"PeriodicalIF":5.3000,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Green Communications and Networking","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10689376/","RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we investigate secure transmission in a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-assisted downlink multiple-input single-output (MISO) wireless network. The secrecy rate is maximized by jointly designing the transmit beamforming and the transmission and reflection coefficients of the STAR-RIS, while satisfying the electromagnetic-property constraint of the STAR-RIS and the transmit power limit of the base station. Since this communication network operates in a dynamic environment, the optimization problem is non-convex and mathematically difficult to solve. To address this issue, two deep reinforcement learning (DRL)-based algorithms, namely the soft actor-critic (SAC) algorithm and soft actor-critic based on loss-adjusted approximate actor prioritized experience replay (L3APER-SAC), are proposed to obtain the maximum reward by constantly interacting with and learning from the dynamic environment. Moreover, to achieve higher performance and stability, the L3APER-SAC algorithm employs two experience replay buffers: a regular experience replay buffer and a prioritized experience replay buffer. Simulation results comprehensively assess the performance of the two DRL algorithms and indicate that both proposed algorithms outperform benchmark approaches. In particular, L3APER-SAC exhibits superior performance, albeit with an associated increase in computational complexity.
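The abstract contrasts a regular (uniform) replay buffer with a prioritized one. As a point of reference only, the sketch below shows a minimal proportional prioritized experience replay buffer in Python; the class name, parameters, and priority rule are illustrative assumptions and do not reproduce the paper's loss-adjusted L3APER-SAC scheme.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch;
    not the paper's L3APER-SAC implementation)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity  # maximum number of stored transitions
        self.alpha = alpha        # how strongly priorities skew sampling (0 = uniform)
        self.buffer = []          # stored (state, action, reward, next_state, done) tuples
        self.priorities = []      # one sampling priority per stored transition
        self.pos = 0              # next write index (ring-buffer overwrite)

    def add(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority ** alpha.
        weights = [p ** self.alpha for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        # Larger TD error -> higher replay priority; epsilon keeps priorities nonzero.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + 1e-6
```

A two-buffer agent in the spirit of the abstract would keep a plain uniform buffer alongside this one and mix minibatches from both when updating the SAC networks.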