A pipelining task offloading strategy via delay-aware multi-agent reinforcement learning in Cybertwin-enabled 6G network

Impact Factor: 7.5 · CAS Tier 2 (Computer Science) · JCR Q1 (Telecommunications)
Haiwen Niu, Luhan Wang, Keliang Du, Zhaoming Lu, Xiangming Wen, Yu Liu
{"title":"A pipelining task offloading strategy via delay-aware multi-agent reinforcement learning in Cybertwin-enabled 6G network","authors":"Haiwen Niu ,&nbsp;Luhan Wang ,&nbsp;Keliang Du ,&nbsp;Zhaoming Lu ,&nbsp;Xiangming Wen ,&nbsp;Yu Liu","doi":"10.1016/j.dcan.2023.04.004","DOIUrl":null,"url":null,"abstract":"<div><div>Cybertwin-enabled 6th Generation (6G) network is envisioned to support artificial intelligence-native management to meet changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) technologies driven by Cybertwins have been proposed for adaptive task offloading strategies. However, the existence of random transmission delay between Cybertwin-driven agents and underlying networks is not considered in related works, which destroys the standard Markov property and increases the decision reaction time to reduce the task offloading strategy performance. In order to address this problem, we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process (MDP). Then, we design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. Firstly, the state space is augmented using the lastly-received state and historical actions to rebuild the Markov property. Secondly, Gate Transformer-XL is introduced to capture historical actions' importance and maintain the consistent input dimension dynamically changed due to random transmission delays. Thirdly, a sampling method and a new loss function with the difference between the current and target state value and the difference between real state-action value and augmented state-action value are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving the task offloading performance in the random-delay Cybertwin-enabled 6G networks.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"11 1","pages":"Pages 92-105"},"PeriodicalIF":7.5000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Communications and Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352864823000810","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
引用次数: 0

Abstract

The Cybertwin-enabled 6th Generation (6G) network is envisioned to support artificial intelligence-native management that meets the changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) techniques driven by Cybertwins have been proposed for adaptive task offloading strategies. However, related works do not account for the random transmission delay between Cybertwin-driven agents and the underlying networks, which breaks the standard Markov property and lengthens the decision reaction time, degrading task offloading performance. To address this problem, we propose a pipelining task offloading method that lowers the decision reaction time, and we model the problem as a delay-aware Markov Decision Process (MDP). We then design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. First, the state space is augmented with the last-received state and the historical actions to restore the Markov property. Second, a Gated Transformer-XL is introduced to capture the importance of historical actions and to keep the input dimension consistent despite its dynamic changes under random transmission delays. Third, a sampling method and a new loss function, built on the difference between the current and target state values and the difference between the real and augmented state-action values, are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods effectively reduce reaction time and improve task offloading performance in random-delay Cybertwin-enabled 6G networks.
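Since only the abstract is available here, the following is a minimal Python sketch of the first step described above (state augmentation under random delay), not the authors' implementation. All names (`DelayAwareStateAugmenter`, `on_state_received`, etc.) are hypothetical, and the fixed-length zero-padding of the pending-action buffer is a simplification: the paper instead uses a Gated Transformer-XL to handle the variable-length action history.

```python
import numpy as np
from collections import deque

class DelayAwareStateAugmenter:
    """Hypothetical sketch: rebuild an approximately Markov state when
    observations arrive with random delay. The agent conditions on the
    last state it actually received plus every action it has issued
    since, because those actions already affect the environment but are
    not yet visible in any received observation."""

    def __init__(self, state_dim: int, action_dim: int, max_delay_steps: int):
        self.action_dim = action_dim
        self.max_delay_steps = max_delay_steps   # assumed upper bound on the delay
        self.pending_actions = deque(maxlen=max_delay_steps)
        self.last_state = np.zeros(state_dim)

    def on_state_received(self, state: np.ndarray, n_reflected: int) -> None:
        """Store the freshly received state and drop the actions whose
        effects it already reflects."""
        self.last_state = state
        for _ in range(min(n_reflected, len(self.pending_actions))):
            self.pending_actions.popleft()

    def on_action_taken(self, action: np.ndarray) -> None:
        """Record an action not yet reflected in any observation."""
        self.pending_actions.append(action)

    def augmented_state(self) -> np.ndarray:
        """Concatenate the last-received state with the zero-padded
        pending-action history, keeping the input dimension fixed even
        though the delay (and hence the history length) is random."""
        padded = np.zeros((self.max_delay_steps, self.action_dim))
        for i, a in enumerate(self.pending_actions):
            padded[i] = np.asarray(a)
        return np.concatenate([self.last_state, padded.ravel()])
```

The third step can be read the same way: a loss that adds a consistency term between the real and augmented state-action values to the usual value-difference term. The pairing of terms and the weight `beta` below are assumptions; the abstract names only the two differences.

```python
import torch
import torch.nn.functional as F

def delay_aware_loss(v_current: torch.Tensor, v_target: torch.Tensor,
                     q_real: torch.Tensor, q_augmented: torch.Tensor,
                     beta: float = 0.5) -> torch.Tensor:
    """Hypothetical combined loss: a TD-style term pulling the current
    state value toward its target, plus a term pulling the Q-value of
    the augmented state toward the Q-value of the real (delay-free)
    state, so sampled trajectories stay close to the real ones."""
    td_term = F.mse_loss(v_current, v_target.detach())
    consistency_term = F.mse_loss(q_augmented, q_real.detach())
    return td_term + beta * consistency_term
```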
Source journal: Digital Communications and Networks (Computer Science - Hardware and Architecture)
CiteScore: 12.80
Self-citation rate: 5.10%
Publication volume: 915 articles
Review time: 30 weeks
Journal description: Digital Communications and Networks is a prestigious journal that focuses on communication systems and networks. We publish only top-notch original articles and authoritative reviews, which undergo rigorous peer review. We are proud to announce that all our articles are fully Open Access and can be accessed on ScienceDirect. Our journal is recognized and indexed by eminent databases such as the Science Citation Index Expanded (SCIE) and Scopus. In addition to regular articles, we may also consider exceptional conference papers that have been significantly expanded. Furthermore, we periodically release special issues that focus on specific aspects of the field. In conclusion, Digital Communications and Networks is a leading journal that guarantees exceptional quality and accessibility for researchers and scholars in the field of communication systems and networks.