Multi-Agent Deep Reinforcement Learning Based Computation Offloading Approach for LEO Satellite Broadband Networks

Junyu Lai, Huashuo Liu, Yusong Sun, Junhong Zhu, Wanyi Ma, Lianqiang Gan
{"title":"Multi-Agent Deep Reinforcement Learning Based Computation Offloading Approach for LEO Satellite Broadband Networks","authors":"Junyu Lai, Huashuo Liu, Yusong Sun, Junhong Zhu, Wanyi Ma, Lianqiang Gan","doi":"10.1109/ISCC58397.2023.10218146","DOIUrl":null,"url":null,"abstract":"Conventional computation offloading approaches are originally designed for ground networks, and are not effective for low earth orbit (LEO) satellite networks. This paper proposes a multi-agent deep reinforcement learning (MADRL) algorithm for making multi-level offloading decisions in LEO satellite networks. Offloading is formulated as a partially observable Markov decision process based multi-agent decision problem. Each satellite as an agent either conducts a received task, forwards it to neighbors, or sends it to ground clouds based on its own policy. These agents are independent and their deep neural networks to make offloading decisions share identical parameter values and are trained by using the same replay buffer. A centralized training and distributed executing mechanism is adopted to ensure that agents can make globally optimized offloading decisions. Comparative experiments demonstrate that the proposed MADRL algorithm outperforms the five baselines in terms of task processing delay and bandwidth consumption with acceptable computational complexity.","PeriodicalId":265337,"journal":{"name":"2023 IEEE Symposium on Computers and Communications (ISCC)","volume":"59 1","pages":"1435-1440"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Symposium on Computers and Communications (ISCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCC58397.2023.10218146","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Conventional computation offloading approaches were originally designed for ground networks and are not effective for low earth orbit (LEO) satellite networks. This paper proposes a multi-agent deep reinforcement learning (MADRL) algorithm for making multi-level offloading decisions in LEO satellite networks. Offloading is formulated as a multi-agent decision problem based on a partially observable Markov decision process. Each satellite acts as an agent that, according to its own policy, either executes a received task, forwards it to a neighboring satellite, or sends it to a ground cloud. The agents are independent, yet the deep neural networks with which they make offloading decisions share identical parameters and are trained using a common replay buffer. A centralized training and distributed execution mechanism is adopted so that agents can make globally optimized offloading decisions. Comparative experiments demonstrate that the proposed MADRL algorithm outperforms five baselines in terms of task processing delay and bandwidth consumption, with acceptable computational complexity.
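
The abstract only outlines the mechanism, so the sketch below is a minimal, hypothetical illustration of the ideas it names: every satellite agent chooses among processing a task locally, forwarding it to a neighbor, or offloading it to the ground cloud, using one parameter-shared network, while transitions from all agents feed a single replay buffer used for centralized training. The observation size, number of neighbors, network architecture, and the DQN-style update are assumptions made for illustration; the paper publishes no code and may use a different MADRL algorithm and reward design.

```python
# Hypothetical sketch of parameter-shared agents with a common replay buffer
# (centralized training, distributed execution). All dimensions and the
# DQN-style update are illustrative assumptions, not taken from the paper.
import random
from collections import deque

import torch
import torch.nn as nn

N_NEIGHBORS = 4   # assumed number of inter-satellite-link neighbors
OBS_DIM = 8       # assumed local observation size (queue length, link state, ...)
ACTIONS = ["process_local"] + [f"forward_{i}" for i in range(N_NEIGHBORS)] + ["offload_ground"]


class QNet(nn.Module):
    """Shared Q-network: every satellite agent uses the same parameters."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


shared_q = QNet(OBS_DIM, len(ACTIONS))   # one set of weights for all agents
replay = deque(maxlen=100_000)           # single replay buffer shared by all agents
optimizer = torch.optim.Adam(shared_q.parameters(), lr=1e-3)

# Each agent appends its transitions as (obs, action_idx, reward, next_obs, done),
# with obs/next_obs as 1-D float tensors, e.g.:
# replay.append((obs_t, action_idx, reward, obs_t1, float(done)))


def act(local_obs: torch.Tensor, epsilon: float = 0.1) -> int:
    """Distributed execution: each satellite decides from its own partial observation."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(shared_q(local_obs).argmax().item())


def train_step(batch_size: int = 64, gamma: float = 0.99) -> None:
    """Centralized training: sample experiences pooled from all agents."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    obs, act_idx, rew, next_obs, done = zip(*batch)
    obs, next_obs = torch.stack(obs), torch.stack(next_obs)
    act_idx = torch.tensor(act_idx).unsqueeze(1)
    rew = torch.tensor(rew, dtype=torch.float32)
    done = torch.tensor(done, dtype=torch.float32)

    q = shared_q(obs).gather(1, act_idx).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * shared_q(next_obs).max(dim=1).values * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, distributed execution corresponds to each satellite calling act() on its own partial observation, while centralized training corresponds to train_step() drawing minibatches from the buffer pooled across all agents; the actual architecture, reward, and update rule used in the paper may differ.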