Distributed deep reinforcement learning for independent task offloading in Mobile Edge Computing

Impact Factor: 8.0 · CAS Zone 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
Mohsen Darchini-Tabrizi, Amirhossein Roudgar, Reza Entezari-Maleki, Leonel Sousa
Journal of Network and Computer Applications, Volume 240, Article 104211. DOI: 10.1016/j.jnca.2025.104211. Published: 2025-05-14.
Citations: 0

Abstract

Mobile Edge Computing (MEC) has been identified as an innovative paradigm to improve the performance and efficiency of mobile applications by offloading computation-intensive tasks to nearby edge servers. However, the effective implementation of task offloading in MEC systems faces challenges due to uncertainty, heterogeneity, and dynamicity. Deep Reinforcement Learning (DRL) provides a powerful approach for devising optimal task offloading policies in complex and uncertain environments. This paper presents a DRL-based task offloading approach using Deep Deterministic Policy Gradient (DDPG) and Distributed Distributional Deep Deterministic Policy Gradient (D4PG) algorithms. The proposed solution establishes a distributed system, where multiple mobile devices act as Reinforcement Learning (RL) agents to optimize their individual performance. To reduce the computational complexity of the neural networks, Gated Recurrent Units (GRU) are used instead of Long Short-Term Memory (LSTM) units to predict the load of edge nodes within the observed state. In addition, a GRU-based sequencing model is introduced to estimate task sizes in specific scenarios where these sizes are unknown. Finally, a novel scheduling algorithm is proposed that leverages the estimated task sizes to outperform commonly used approaches. Comprehensive simulations were conducted to evaluate the efficacy of the proposed approach, benchmarking it against multiple baseline and state-of-the-art algorithms. Results show significant improvements in terms of average processing delay and task drop rates, thereby confirming the success of the proposed approach.
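The abstract's claim that GRUs cut computational complexity relative to LSTMs follows from simple gate arithmetic: an LSTM cell has four gated transformations, a GRU only three. A minimal sketch of that comparison, using hypothetical feature and hidden sizes (the paper's actual network dimensions are not stated in the abstract):

```python
# Back-of-the-envelope comparison of GRU vs. LSTM parameter counts,
# illustrating why swapping LSTM for GRU shrinks the recurrent model.
# Generic RNN arithmetic only, not the paper's exact architecture.

def rnn_param_count(input_dim: int, hidden_dim: int, n_gates: int) -> int:
    """Parameters of one recurrent cell: each gate owns an input-to-hidden
    matrix, a hidden-to-hidden matrix, and a bias vector."""
    per_gate = input_dim * hidden_dim + hidden_dim * hidden_dim + hidden_dim
    return n_gates * per_gate

input_dim, hidden_dim = 16, 64   # hypothetical state/feature sizes

lstm_params = rnn_param_count(input_dim, hidden_dim, n_gates=4)  # i, f, o, g
gru_params = rnn_param_count(input_dim, hidden_dim, n_gates=3)   # z, r, h-tilde

print(f"LSTM: {lstm_params}, GRU: {gru_params}, "
      f"saving: {1 - gru_params / lstm_params:.0%}")
```

Regardless of the chosen dimensions, the 3-gates-versus-4 structure yields a fixed 25% reduction in recurrent-cell parameters, which matters on resource-constrained mobile devices running the prediction model locally.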
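The benefit of knowing (or estimating) task sizes for scheduling can be seen in a toy single-server queue: ordering tasks shortest-first lowers the mean completion delay versus arrival order. This is classic shortest-job-first intuition under hypothetical task sizes, not the paper's actual scheduling algorithm, whose details are in the full text:

```python
# Why estimated task sizes help a scheduler: processing a queue
# shortest-first reduces average completion delay compared with FIFO.
# Toy illustration with made-up numbers, not the proposed algorithm.

def avg_completion_delay(task_sizes, cpu_rate):
    """Mean time until each queued task finishes, processed in order."""
    t, total = 0.0, 0.0
    for size in task_sizes:
        t += size / cpu_rate          # this task's finish time
        total += t
    return total / len(task_sizes)

queue = [8.0, 1.0, 4.0, 2.0]          # hypothetical task sizes (Mcycles)
rate = 2.0                            # hypothetical server speed (Mcycles/s)

fifo = avg_completion_delay(queue, rate)
sjf = avg_completion_delay(sorted(queue), rate)
assert sjf <= fifo                    # shortest-first never does worse here
print(f"FIFO: {fifo}s, size-aware: {sjf}s")
```

When true sizes are unavailable, the estimates produced by the GRU-based sequencing model would stand in for the `task_sizes` values, so the quality of the size estimator directly bounds how much of this scheduling gain is realized.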
Source journal: Journal of Network and Computer Applications (Engineering/Technology – Computer Science: Interdisciplinary Applications)
CiteScore: 21.50
Self-citation rate: 3.40%
Articles per year: 142
Review time: 37 days
About the journal: The Journal of Network and Computer Applications welcomes research contributions, surveys, and notes in all areas relating to computer networks and applications thereof. Sample topics include new design techniques, interesting or novel applications, components or standards; computer networks with tools such as WWW; emerging standards for internet protocols; wireless networks; mobile computing; emerging computing models such as cloud computing and grid computing; applications of networked systems for remote collaboration and telemedicine, etc. The journal is abstracted and indexed in Scopus, Engineering Index, Web of Science, Science Citation Index Expanded, and INSPEC.