A deep reinforcement approach for computation offloading in MEC dynamic networks

IF 1.9 · Zone 4 (Engineering & Technology) · Q2 Engineering
Yibiao Fan, Xiaowei Cai
{"title":"用于 MEC 动态网络计算卸载的深度强化方法","authors":"Yibiao Fan, Xiaowei Cai","doi":"10.1186/s13634-024-01142-2","DOIUrl":null,"url":null,"abstract":"<p>In this study, we investigate the challenges associated with dynamic time slot server selection in mobile edge computing (MEC) systems. This study considers the fluctuating nature of user access at edge servers and the various factors that influence server workload, including offloading policies, offloading ratios, users’ transmission power, and the servers’ reserved capacity. To streamline the process of selecting edge servers with an eye on long-term optimization, we cast the problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL)-based algorithm as a solution. Our approach involves learning the selection strategy by analyzing the performance of server selections in previous iterations. Simulation outcomes show that our DRL-based algorithm surpasses benchmarks, delivering minimal average latency.</p>","PeriodicalId":11816,"journal":{"name":"EURASIP Journal on Advances in Signal Processing","volume":"23 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deep reinforcement approach for computation offloading in MEC dynamic networks\",\"authors\":\"Yibiao Fan, Xiaowei Cai\",\"doi\":\"10.1186/s13634-024-01142-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In this study, we investigate the challenges associated with dynamic time slot server selection in mobile edge computing (MEC) systems. This study considers the fluctuating nature of user access at edge servers and the various factors that influence server workload, including offloading policies, offloading ratios, users’ transmission power, and the servers’ reserved capacity. To streamline the process of selecting edge servers with an eye on long-term optimization, we cast the problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL)-based algorithm as a solution. Our approach involves learning the selection strategy by analyzing the performance of server selections in previous iterations. Simulation outcomes show that our DRL-based algorithm surpasses benchmarks, delivering minimal average latency.</p>\",\"PeriodicalId\":11816,\"journal\":{\"name\":\"EURASIP Journal on Advances in Signal Processing\",\"volume\":\"23 1\",\"pages\":\"\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2024-04-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"EURASIP Journal on Advances in Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1186/s13634-024-01142-2\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"EURASIP Journal on Advances in Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1186/s13634-024-01142-2","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

In this study, we investigate the challenges associated with dynamic time slot server selection in mobile edge computing (MEC) systems. This study considers the fluctuating nature of user access at edge servers and the various factors that influence server workload, including offloading policies, offloading ratios, users’ transmission power, and the servers’ reserved capacity. To streamline the process of selecting edge servers with an eye on long-term optimization, we cast the problem as a Markov Decision Process (MDP) and propose a Deep Reinforcement Learning (DRL)-based algorithm as a solution. Our approach involves learning the selection strategy by analyzing the performance of server selections in previous iterations. Simulation outcomes show that our DRL-based algorithm surpasses benchmarks, delivering minimal average latency.
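
The abstract describes learning a time-slot server-selection policy by formulating the problem as an MDP and training a DRL agent on the outcomes of past selections. The paper's own code is not shown here, so the following is only a minimal illustrative sketch of that kind of loop, using a small DQN-style agent with an epsilon-greedy policy. The state layout (per-server load and reserved capacity), the toy latency model, the network sizes, and all hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
# Minimal, hypothetical DQN-style sketch of DRL-based edge-server selection.
# All names, dimensions, and the toy latency model are illustrative assumptions.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

NUM_SERVERS = 5              # assumed number of candidate edge servers (action space)
STATE_DIM = NUM_SERVERS * 2  # assumed state: per-server load and reserved capacity


class QNetwork(nn.Module):
    """Maps an observed MEC state to one Q-value per candidate edge server."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x):
        return self.net(x)


def toy_latency(state: np.ndarray, action: int) -> float:
    """Illustrative latency model: latency grows with the chosen server's load."""
    return 1.0 + 5.0 * float(state[action])


def train(episodes: int = 200, gamma: float = 0.9, eps: float = 0.1):
    q_net = QNetwork(STATE_DIM, NUM_SERVERS)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)

    for _ in range(episodes):
        # Random snapshot of the dynamic network state at the start of an episode.
        state = np.random.rand(STATE_DIM).astype(np.float32)
        for _ in range(50):  # time slots per episode
            # Epsilon-greedy server selection.
            if random.random() < eps:
                action = random.randrange(NUM_SERVERS)
            else:
                with torch.no_grad():
                    action = int(q_net(torch.from_numpy(state)).argmax())

            reward = -toy_latency(state, action)  # minimizing latency == maximizing reward
            next_state = np.random.rand(STATE_DIM).astype(np.float32)
            replay.append((state, action, reward, next_state))
            state = next_state

            if len(replay) >= 64:
                batch = random.sample(list(replay), 64)
                s, a, r, s2 = map(np.array, zip(*batch))
                s = torch.from_numpy(s.astype(np.float32))
                s2 = torch.from_numpy(s2.astype(np.float32))
                a = torch.from_numpy(a).long()
                r = torch.from_numpy(r.astype(np.float32))

                q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    # Bootstrapped target; a separate target network is omitted for brevity.
                    target = r + gamma * q_net(s2).max(dim=1).values
                loss = nn.functional.mse_loss(q, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return q_net


if __name__ == "__main__":
    train()
```

In this sketch the reward is simply the negative of the observed latency, so maximizing the discounted return corresponds to minimizing average latency, which mirrors the objective stated in the abstract.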

Source journal
EURASIP Journal on Advances in Signal Processing (Engineering: Electronic & Electrical)
CiteScore: 3.50
Self-citation rate: 10.50%
Articles published: 109
Review time: 2.6 months
Journal description: The aim of the EURASIP Journal on Advances in Signal Processing is to highlight the theoretical and practical aspects of signal processing in new and emerging technologies. The journal is directed as much at the practicing engineer as at the academic researcher. Authors of articles with novel contributions to the theory and/or practice of signal processing are welcome to submit their articles for consideration.