Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing

Impact Factor 3.6 · CAS Zone 2, Computer Science · JCR Q2, COMPUTER SCIENCE, INFORMATION SYSTEMS
Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti
{"title":"Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing","authors":"Xiaohu Gao, Mei Choo Ang, Sara A. Althubiti","doi":"10.1007/s10723-023-09708-4","DOIUrl":null,"url":null,"abstract":"<p>Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it an up-and-coming method for advancing the Internet of Things (IoT). However, current approaches are limited by various factors such as network latency, bandwidth, energy consumption, task characteristics, and edge server overload. To address these limitations, this research propose a novel approach that integrates Deep Reinforcement Learning (DRL) with Deep Deterministic Policy Gradient (DDPG) and Markov Decision Problem for task offloading in MEC. Among DRL algorithms, the ITODDPG algorithm based on the DDPG algorithm and MDP is a popular choice for task offloading in MEC. Firstly, the ITODDPG algorithm formulates the task offloading problem in MEC as an MDP, which enables the agent to learn a policy that maximizes the expected cumulative reward. Secondly, ITODDPG employs a deep neural network to approximate the Q-function, which maps the state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that the ITODDPG algorithm outperforms the baseline algorithms regarding average compensation and convergence speed. In addition to its superior performance, our proposed approach can learn complex non-linear policies using DNN and an information-theoretic objective function to improve the performance of task offloading in MEC. Compared to traditional methods, our approach delivers improved performance, making it highly effective for developing IoT environments. Experimental trials were carried out, and the results indicate that the suggested approach can enhance performance compared to the other three baseline methods. It is highly scalable, capable of handling large and complex environments, and suitable for deployment in real-world scenarios, ensuring its widespread applicability to a diverse range of task offloading and MEC applications.</p>","PeriodicalId":54817,"journal":{"name":"Journal of Grid Computing","volume":"37 1","pages":""},"PeriodicalIF":3.6000,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Grid Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10723-023-09708-4","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Mobile Edge Computing (MEC) offers cloud-like capabilities to mobile users, making it a promising approach for advancing the Internet of Things (IoT). However, current approaches are limited by factors such as network latency, bandwidth, energy consumption, task characteristics, and edge-server overload. To address these limitations, this research proposes a novel approach that integrates Deep Reinforcement Learning (DRL) using the Deep Deterministic Policy Gradient (DDPG) with a Markov Decision Problem (MDP) formulation for task offloading in MEC. Among DRL algorithms, the proposed ITODDPG algorithm, built on DDPG and the MDP formulation, is well suited to task offloading in MEC. First, ITODDPG formulates the task offloading problem in MEC as an MDP, which enables the agent to learn a policy that maximizes the expected cumulative reward. Second, ITODDPG employs a deep neural network (DNN) to approximate the Q-function, which maps state-action pairs to their expected cumulative rewards. Finally, the experimental results demonstrate that ITODDPG outperforms the baseline algorithms in average compensation and convergence speed. In addition to its superior performance, the proposed approach can learn complex non-linear policies using a DNN and an information-theoretic objective function to improve task offloading in MEC. Compared to traditional methods, it delivers improved performance, making it highly effective for developing IoT environments. Experimental trials indicate that the suggested approach enhances performance relative to the three baseline methods. It is highly scalable, capable of handling large and complex environments, and suitable for deployment in real-world scenarios, ensuring its applicability to a wide range of task offloading and MEC applications.
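The abstract describes the method only at a high level: task offloading is cast as an MDP, and a DDPG agent learns a policy while a DNN critic approximates the Q-function. The sketch below is purely illustrative and is not the authors' ITODDPG implementation; the state variables (task size, CPU cycles, channel gain, edge-server load), the delay/energy reward weights, and the network sizes are assumptions chosen only to make the example self-contained.

```python
import numpy as np
import torch
import torch.nn as nn


class OffloadingEnv:
    """Toy MEC offloading MDP (illustrative, not the paper's model).
    State: (task size, CPU cycles, channel gain, edge-server load) in [0, 1].
    Action: fraction of the task offloaded to the edge server."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = None

    def reset(self):
        self.state = self.rng.uniform(0.0, 1.0, size=4).astype(np.float32)
        return self.state

    def step(self, action):
        frac = float(np.clip(action, 0.0, 1.0))
        size, cycles, gain, load = self.state
        local_delay = (1.0 - frac) * cycles            # execute locally
        tx_delay = frac * size / (gain + 0.1)          # uplink transmission
        edge_delay = frac * cycles / (2.0 - load)      # execute at the edge
        energy = (1.0 - frac) * cycles * 0.5 + frac * size * 0.2
        reward = -(local_delay + tx_delay + edge_delay + 0.5 * energy)
        self.state = self.rng.uniform(0.0, 1.0, size=4).astype(np.float32)
        return self.state, float(reward), False


class Actor(nn.Module):
    """Deterministic policy mu(s): offloading fraction in [0, 1]."""
    def __init__(self, s_dim=4, a_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, 64), nn.ReLU(),
                                 nn.Linear(64, a_dim), nn.Sigmoid())

    def forward(self, s):
        return self.net(s)


class Critic(nn.Module):
    """DNN approximation of Q(s, a), as in the DDPG critic."""
    def __init__(self, s_dim=4, a_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))


# One actor-critic update on a single transition. A full DDPG agent would
# add a replay buffer, exploration noise, and target networks.
env, actor, critic = OffloadingEnv(), Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

s = torch.from_numpy(env.reset()).unsqueeze(0)       # shape (1, 4)
a = actor(s)                                          # shape (1, 1)
s2_np, r, _ = env.step(a.item())
s2 = torch.from_numpy(s2_np).unsqueeze(0)

with torch.no_grad():                                 # bootstrapped Q target
    target = r + gamma * critic(s2, actor(s2))
critic_loss = nn.functional.mse_loss(critic(s, a.detach()), target)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

actor_loss = -critic(s, actor(s)).mean()              # ascend Q(s, mu(s))
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In the full DDPG setting the paper builds on, transitions are stored in a replay buffer and slowly updated target networks stabilize the bootstrapped Q-targets; the single-transition update above only illustrates the two gradient steps involved.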

Source Journal
Journal of Grid Computing
Categories: COMPUTER SCIENCE, INFORMATION SYSTEMS; COMPUTER SCIENCE, THEORY & METHODS
CiteScore: 8.70
Self-citation rate: 9.10%
Articles per year: 34
Review time: >12 weeks
Journal description: Grid Computing is an emerging technology that enables large-scale resource sharing and coordinated problem solving within distributed, often loosely coordinated groups, sometimes termed "virtual organizations". By providing scalable, secure, high-performance mechanisms for discovering and negotiating access to remote resources, Grid technologies promise to make it possible for scientific collaborations to share resources on an unprecedented scale, and for geographically distributed groups to work together in ways that were previously impossible. Similar technologies are being adopted within industry, where they serve as important building blocks for emerging service provider infrastructures. Even though the advantages of this technology for certain classes of applications have been acknowledged, research in a variety of disciplines, including not only multiple domains of computer science (networking, middleware, programming, algorithms) but also the application disciplines themselves, as well as areas such as sociology and economics, is needed to broaden the applicability and scope of the current body of knowledge.