A Novel Latency-Aware Resource Allocation and Offloading Strategy With Improved Prioritization and DDQN for Edge-Enabled UDNs

Impact Factor: 4.7 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS)
Nidhi Sharma;Krishan Kumar
DOI: 10.1109/TNSM.2024.3434457
Journal: IEEE Transactions on Network and Service Management, vol. 21, no. 6, pp. 6260-6272
Published: 2024-07-26
Citations: 0

Abstract

Driven by the vision of 6G, the demand for diverse computation-intensive and delay-sensitive tasks continues to rise. Integrating mobile edge computing with the ultra-dense network not only handles traffic from a large number of smart devices but also delivers substantial processing capability to users. This combined network is expected to be an effective solution for meeting latency-critical requirements and will enhance the quality of user experience. Nevertheless, when a massive number of devices offload tasks to edge servers, channel interference, network load, and energy shortages at user devices (UDs) increase. Therefore, we investigate the joint uplink and downlink resource allocation and task offloading optimization problem, aiming to minimize the overall task delay while sustaining UD battery life. To achieve long-term gains while making quick decisions, we propose an improved double deep Q-network scheme named Prioritized double deep Q-network, in which prioritized experience replay is improved by considering an experience-freshness factor alongside the temporal-difference error to achieve fast and efficient learning. Extensive numerical results demonstrate the efficacy of the proposed scheme in terms of delay and energy consumption. In particular, our scheme decreases delay by 11.86%, 26.22%, 48.56%, and 61.04% compared to the OELO scheme, the DQN scheme, LOS, and EOS, respectively, as the number of UDs varies from 30 to 180.
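The two mechanisms the abstract names can be sketched briefly: the standard double-DQN target (the online network selects the next action, the target network values it) and a replay priority that combines the temporal-difference error with an experience-freshness term. The class below is an illustrative sketch only; the weighted-sum combination rule, the exponential freshness decay, and all constants (`alpha`, `freshness_weight`) are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Standard double-DQN target: action chosen by the online network,
    value taken from the target network, decoupling selection from evaluation."""
    a_star = int(np.argmax(next_q_online))
    return reward + (0.0 if done else gamma * float(next_q_target[a_star]))

class FreshnessPrioritizedReplay:
    """Replay buffer whose sampling priority mixes |TD error| with an
    experience-freshness term (hypothetical weighted-sum rule)."""

    def __init__(self, capacity=10000, alpha=0.6, freshness_weight=0.5):
        self.capacity = capacity
        self.alpha = alpha                    # priority exponent
        self.freshness_weight = freshness_weight
        self.buffer, self.td_errors, self.insert_step = [], [], []
        self.step = 0                         # global insertion counter

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:   # evict the oldest experience
            self.buffer.pop(0); self.td_errors.pop(0); self.insert_step.pop(0)
        self.buffer.append(transition)
        self.td_errors.append(abs(td_error))
        self.insert_step.append(self.step)
        self.step += 1

    def priorities(self):
        td = np.array(self.td_errors)
        # freshness decays exponentially with age: 1 for the newest entry
        age = (self.step - 1) - np.array(self.insert_step)
        freshness = np.exp(-age / max(len(self.buffer), 1))
        p = (td + self.freshness_weight * freshness + 1e-6) ** self.alpha
        return p / p.sum()

    def sample(self, batch_size, rng=np.random.default_rng(0)):
        idx = rng.choice(len(self.buffer), size=batch_size, p=self.priorities())
        return [self.buffer[i] for i in idx]
```

Under this sketch, a recent transition with a large TD error is sampled most often, which is the intuition behind combining freshness with TD error: stale high-error experiences no longer dominate replay.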
Source Journal

IEEE Transactions on Network and Service Management (Computer Science – Computer Networks and Communications)
CiteScore: 9.30
Self-citation rate: 15.10%
Articles published per year: 325
Journal description: IEEE Transactions on Network and Service Management publishes (online only) peer-reviewed archival-quality papers that advance the state of the art and practical applications of network and service management. Theoretical research contributions (presenting new concepts and techniques) and applied contributions (reporting on experiences and experiments with actual systems) are encouraged. These transactions focus on the key technical issues related to: Management Models, Architectures and Frameworks; Service Provisioning, Reliability and Quality Assurance; Management Functions; Enabling Technologies; Information and Communication Models; Policies; Applications and Case Studies; Emerging Technologies and Standards.