Navigating Boundaries in Quantifying Robustness: A DRL Expedition for Non-Linear Energy Harvesting IoT Networks

IF 3.7 · CAS Tier 3 (Computer Science) · JCR Q2 (Telecommunications)
Ali Asgher Mohammed;Mirza Wasay Baig;Muhammad Abdullah Sohail;Syed Asad Ullah;Haejoon Jung;Syed Ali Hassan
{"title":"探索鲁棒性量化的边界:非线性能量收集物联网网络的 DRL 考察","authors":"Ali Asgher Mohammed;Mirza Wasay Baig;Muhammad Abdullah Sohail;Syed Asad Ullah;Haejoon Jung;Syed Ali Hassan","doi":"10.1109/LCOMM.2024.3451702","DOIUrl":null,"url":null,"abstract":"This letter investigates the uplink communication of an energy harvesting (EH)-enabled resource-constrained secondary device (RCSD) coexisting with primary devices in a cognitive radio-aided non-orthogonal multi-access (CR-NOMA) network. Assuming a non-linear EH model in practice, the data rate of the RCSD is maximized using deep reinforcement learning (DRL). We first derive the optimal solutions for the parameters of interest including the time-sharing coefficient and transmit power of the RCSD, using convex optimization and then implement the DRL to address a continuous action spaced optimization problem. To comprehensively assess the agent’s performance and adaptability, we implement various DRL algorithms and compare them under non-linear EH, which reveals their suitability in various scenarios, aiding in selecting the most effective approach.","PeriodicalId":13197,"journal":{"name":"IEEE Communications Letters","volume":"28 10","pages":"2447-2451"},"PeriodicalIF":3.7000,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Navigating Boundaries in Quantifying Robustness: A DRL Expedition for Non-Linear Energy Harvesting IoT Networks\",\"authors\":\"Ali Asgher Mohammed;Mirza Wasay Baig;Muhammad Abdullah Sohail;Syed Asad Ullah;Haejoon Jung;Syed Ali Hassan\",\"doi\":\"10.1109/LCOMM.2024.3451702\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This letter investigates the uplink communication of an energy harvesting (EH)-enabled resource-constrained secondary device (RCSD) coexisting with primary devices in a cognitive radio-aided non-orthogonal multi-access (CR-NOMA) network. Assuming a non-linear EH model in practice, the data rate of the RCSD is maximized using deep reinforcement learning (DRL). We first derive the optimal solutions for the parameters of interest including the time-sharing coefficient and transmit power of the RCSD, using convex optimization and then implement the DRL to address a continuous action spaced optimization problem. 
To comprehensively assess the agent’s performance and adaptability, we implement various DRL algorithms and compare them under non-linear EH, which reveals their suitability in various scenarios, aiding in selecting the most effective approach.\",\"PeriodicalId\":13197,\"journal\":{\"name\":\"IEEE Communications Letters\",\"volume\":\"28 10\",\"pages\":\"2447-2451\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Communications Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10659082/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Communications Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10659082/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

This letter investigates the uplink communication of an energy harvesting (EH)-enabled resource-constrained secondary device (RCSD) coexisting with primary devices in a cognitive radio-aided non-orthogonal multiple access (CR-NOMA) network. Assuming a practical non-linear EH model, the data rate of the RCSD is maximized using deep reinforcement learning (DRL). We first derive the optimal solutions for the parameters of interest, including the time-sharing coefficient and transmit power of the RCSD, using convex optimization, and then implement DRL to address an optimization problem with a continuous action space. To comprehensively assess the agent's performance and adaptability, we implement various DRL algorithms and compare them under non-linear EH, revealing their suitability in different scenarios and aiding in the selection of the most effective approach.
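
The letter's full system model is not reproduced on this page, but the two ingredients named in the abstract can be sketched briefly. Below is a minimal Python illustration of (i) a sigmoidal saturation curve, a common choice in the literature for a non-linear EH model, and (ii) a simplified rate objective for a harvest-then-transmit RCSD parameterized by the time-sharing coefficient and transmit power. All numerical values (M_SAT, A, B, channel gain, noise power) are illustrative assumptions, not taken from the paper, and the toy rate function omits the CR-NOMA coexistence constraints treated in the letter.

```python
import numpy as np

# --- Non-linear EH model -----------------------------------------------------
# Sigmoidal (saturation) EH curve, a common non-linear EH model in the
# literature. The circuit parameters below are illustrative assumptions and
# are NOT taken from the letter.
M_SAT = 0.024        # saturation (maximum) harvested power [W]
A, B = 150.0, 0.014  # steepness and turn-on threshold of the EH circuit

def harvested_power(p_in: np.ndarray) -> np.ndarray:
    """Map received RF power p_in [W] to harvested DC power [W]."""
    psi = M_SAT / (1.0 + np.exp(-A * (p_in - B)))  # logistic response
    omega = 1.0 / (1.0 + np.exp(A * B))            # offset so 0 W in -> 0 W out
    return (psi - M_SAT * omega) / (1.0 - omega)

# --- Toy rate objective of the RCSD ------------------------------------------
def rcsd_rate(alpha: float, p_tx: float, gain: float, noise: float = 1e-9) -> float:
    """Simplified uplink spectral efficiency [bit/s/Hz]: harvest for a fraction
    alpha of the slot, transmit for the remaining (1 - alpha). The letter's
    actual objective also captures CR-NOMA coexistence, omitted here."""
    return (1.0 - alpha) * np.log2(1.0 + p_tx * gain / noise)

if __name__ == "__main__":
    p_rx = np.linspace(0.0, 0.05, 6)  # candidate received RF powers [W]
    print("harvested [W]:", np.round(harvested_power(p_rx), 4))  # saturates near M_SAT
    print("toy rate     :", round(rcsd_rate(alpha=0.4, p_tx=1e-3, gain=1e-6), 3))
```

Because both decision variables (the time-sharing coefficient and the transmit power) are continuous, the problem has a continuous action space, which is why actor-critic-style DRL algorithms are typically preferred here over value-based methods that would require discretizing the actions.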
Source journal
IEEE Communications Letters
Category: Engineering & Technology - Telecommunications
CiteScore: 8.10
Self-citation rate: 7.30%
Articles per year: 590
Review time: 2.8 months
Journal description: The IEEE Communications Letters publishes short papers in a rapid publication cycle on advances in the state-of-the-art of communication over different media and channels including wire, underground, waveguide, optical fiber, and storage channels. Both theoretical contributions (including new techniques, concepts, and analyses) and practical contributions (including system experiments and prototypes, and new applications) are encouraged. This journal focuses on the physical layer and the link layer of communication systems.