Utility optimization for computation offloading and splitting in time-varying HAP and LEO satellite integrated MEC networks

Authors: Ruidong Zhang, Jiadong Zhang, Xue Wang, Wenxiao Shi
Journal: Computer Networks (Q1, Computer Science, Hardware & Architecture)
DOI: 10.1016/j.comnet.2024.110866
Publication date: 2024-10-22
URL: https://www.sciencedirect.com/science/article/pii/S1389128624006984
Citations: 0
Abstract
To provide ubiquitous and low-latency communication and computation services for remote and disaster areas, high altitude platform (HAP) and low earth orbit (LEO) satellite integrated multi-access edge computing (HLS-MEC) networks have emerged as a promising solution. However, most current studies assume that the number of connected satellites is fixed and neglect modeling of the time-varying multi-satellite computing process. Motivated by this, we establish an M/G/K(t) queuing model to characterize task computation on satellites. To evaluate the efficiency and quality of computation offloading and splitting, we develop a utility model, defined as the difference between a value function that assesses the benefits of task offloading in terms of latency reduction and energy saving, and a cost function that quantifies the expenses associated with latency and energy consumption. After formulating the utility maximization problem, we propose a deep reinforcement learning-based offloading and splitting (DBOS) scheme that can overcome the time-varying uncertainties and high dynamics of the HLS-MEC network. Specifically, the DBOS scheme learns the best computation offloading and splitting policy to maximize the utility by sensing the number of connected satellites, the distance between the HAP and the satellites, the available computing resources, and the task arrival rate. Finally, we evaluate and validate the computational complexity and convergence property of the DBOS scheme. Numerical results show that the DBOS scheme outperforms three benchmark schemes and maximizes the utility under time-varying dynamics.
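The abstract specifies the utility only at a high level: a value function capturing the latency reduction and energy saving of offloading, minus a cost function for the latency and energy actually incurred. The sketch below shows one way such a utility could be evaluated for a given splitting ratio; all functional forms, parameter names (e.g. `cycles_per_bit`, `kappa`, the weights), and numerical values are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch only: utility = value(savings vs. all-local baseline) - cost(incurred).
# The forms and constants below are assumptions; the paper's exact model is not in the abstract.

def local_latency(bits, cpu_hz, cycles_per_bit=500):
    """Computing delay when this part of the task stays on the ground device."""
    return bits * cycles_per_bit / cpu_hz

def offload_latency(bits, rate_bps, edge_cpu_hz, cycles_per_bit=500):
    """Transmission delay plus remote computing delay (queuing delay omitted for brevity)."""
    return bits / rate_bps + bits * cycles_per_bit / edge_cpu_hz

def utility(split_ratio, task_bits, rate_bps, local_cpu_hz, edge_cpu_hz,
            tx_power_w=1.0, kappa=1e-27, cycles_per_bit=500,
            w_time=1.0, w_energy=1.0, c_time=0.5, c_energy=0.5):
    """Assumed utility: weighted savings relative to all-local computing, minus weighted cost."""
    off_bits = split_ratio * task_bits
    loc_bits = task_bits - off_bits

    # The local and offloaded parts are assumed to execute in parallel.
    latency = max(local_latency(loc_bits, local_cpu_hz, cycles_per_bit),
                  offload_latency(off_bits, rate_bps, edge_cpu_hz, cycles_per_bit))

    # Device-side energy: dynamic CPU energy for the local part plus radio energy for transmission.
    energy = (kappa * loc_bits * cycles_per_bit * local_cpu_hz ** 2
              + tx_power_w * off_bits / rate_bps)

    # Baseline: the whole task computed locally on the ground device.
    base_latency = local_latency(task_bits, local_cpu_hz, cycles_per_bit)
    base_energy = kappa * task_bits * cycles_per_bit * local_cpu_hz ** 2

    value = w_time * (base_latency - latency) + w_energy * (base_energy - energy)
    cost = c_time * latency + c_energy * energy
    return value - cost

# Hypothetical observation of the kind the abstract says the DRL agent senses:
# [number of connected satellites, HAP-satellite distance (m),
#  available edge computing resources (Hz), task arrival rate (tasks/s)].
state = [3, 780e3, 5e9, 12.0]
print(utility(split_ratio=0.6, task_bits=8e6, rate_bps=20e6,
              local_cpu_hz=1e9, edge_cpu_hz=state[2]))
```

In the DBOS scheme described above, a learned policy would map an observation like `state` to the offloading and splitting decision (here the fixed `split_ratio=0.6`), rather than using hand-picked values as in this example.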
About the journal
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.