{"title":"Distributed deep reinforcement learning for independent task offloading in Mobile Edge Computing","authors":"Mohsen Darchini-Tabrizi , Amirhossein Roudgar , Reza Entezari-Maleki , Leonel Sousa","doi":"10.1016/j.jnca.2025.104211","DOIUrl":null,"url":null,"abstract":"<div><div>Mobile Edge Computing (MEC) has been identified as an innovative paradigm to improve the performance and efficiency of mobile applications by offloading computation-intensive tasks to nearby edge servers. However, the effective implementation of task offloading in MEC systems faces challenges due to uncertainty, heterogeneity, and dynamicity. Deep Reinforcement Learning (DRL) provides a powerful approach for devising optimal task offloading policies in complex and uncertain environments. This paper presents a DRL-based task offloading approach using Deep Deterministic Policy Gradient (DDPG) and Distributed Distributional Deep Deterministic Policy Gradient (D4PG) algorithms. The proposed solution establishes a distributed system, where multiple mobile devices act as Reinforcement Learning (RL) agents to optimize their individual performance. To reduce the computational complexity of the neural networks, Gated Recurrent Units (GRU) are used instead of Long Short-Term Memory (LSTM) units to predict the load of edge nodes within the observed state. In addition, a GRU-based sequencing model is introduced to estimate task sizes in specific scenarios where these sizes are unknown. Finally, a novel scheduling algorithm is proposed that outperforms commonly used approaches by leveraging the estimated task sizes to achieve superior performance. Comprehensive simulations were conducted to evaluate the efficacy of the proposed approach, benchmarking it against multiple baseline and state-of-the-art algorithms. Results show significant improvements in terms of average processing delay and task drop rates, thereby confirming the success of the proposed approach.</div></div>","PeriodicalId":54784,"journal":{"name":"Journal of Network and Computer Applications","volume":"240 ","pages":"Article 104211"},"PeriodicalIF":8.0000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Network and Computer Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1084804525001080","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0
Abstract
Mobile Edge Computing (MEC) has been identified as an innovative paradigm to improve the performance and efficiency of mobile applications by offloading computation-intensive tasks to nearby edge servers. However, the effective implementation of task offloading in MEC systems faces challenges due to uncertainty, heterogeneity, and dynamicity. Deep Reinforcement Learning (DRL) provides a powerful approach for devising optimal task offloading policies in complex and uncertain environments. This paper presents a DRL-based task offloading approach using Deep Deterministic Policy Gradient (DDPG) and Distributed Distributional Deep Deterministic Policy Gradient (D4PG) algorithms. The proposed solution establishes a distributed system, where multiple mobile devices act as Reinforcement Learning (RL) agents to optimize their individual performance. To reduce the computational complexity of the neural networks, Gated Recurrent Units (GRU) are used instead of Long Short-Term Memory (LSTM) units to predict the load of edge nodes within the observed state. In addition, a GRU-based sequencing model is introduced to estimate task sizes in specific scenarios where these sizes are unknown. Finally, a novel scheduling algorithm is proposed that outperforms commonly used approaches by leveraging the estimated task sizes to achieve superior performance. Comprehensive simulations were conducted to evaluate the efficacy of the proposed approach, benchmarking it against multiple baseline and state-of-the-art algorithms. Results show significant improvements in terms of average processing delay and task drop rates, thereby confirming the success of the proposed approach.
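The abstract names the main building blocks (a GRU-based load predictor within the observed state and a DDPG/D4PG policy on each mobile device) but not their implementation. The sketch below is a minimal illustration, not the authors' code: it assumes a PyTorch implementation, and all class, function, and variable names are hypothetical. It shows how a GRU could predict edge-node load from a short observation window and how that prediction could be concatenated into the state fed to a DDPG-style actor that outputs an offloading preference over the local device and the available edge nodes.

```python
# Illustrative sketch only (assumed PyTorch; names are hypothetical, not from the paper).
import torch
import torch.nn as nn


class EdgeLoadPredictor(nn.Module):
    """Predicts the next load of each edge node from a history of observed loads."""

    def __init__(self, num_edges: int, hidden_size: int = 64):
        super().__init__()
        # A GRU is used rather than an LSTM to reduce parameters and compute,
        # mirroring the choice described in the abstract.
        self.gru = nn.GRU(input_size=num_edges, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_edges)

    def forward(self, load_history: torch.Tensor) -> torch.Tensor:
        # load_history: (batch, window, num_edges) of past observed loads
        _, h_n = self.gru(load_history)      # h_n: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))     # (batch, num_edges) predicted loads


class OffloadingActor(nn.Module):
    """DDPG-style actor: maps the observed state (incl. predicted loads) to an action."""

    def __init__(self, state_dim: int, num_edges: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_edges + 1),   # +1 for executing the task locally
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Softmax yields a continuous preference over {local, edge_1, ..., edge_N};
        # a deterministic policy can take the argmax at execution time.
        return torch.softmax(self.net(state), dim=-1)


if __name__ == "__main__":
    num_edges, window = 4, 8
    predictor = EdgeLoadPredictor(num_edges)
    history = torch.rand(1, window, num_edges)        # dummy load history
    predicted_load = predictor(history)               # (1, num_edges)

    task_state = torch.rand(1, 3)                     # e.g. task size, CPU cycles, deadline
    state = torch.cat([task_state, predicted_load], dim=-1)
    actor = OffloadingActor(state_dim=state.shape[-1], num_edges=num_edges)
    print("offloading preference:", actor(state))
```

During training, DDPG or D4PG would add exploration noise to the actor's output and update it against a separate critic (distributional in the D4PG case); those components, as well as the GRU-based task-size estimator and the scheduling algorithm, are omitted from this sketch.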
About the journal:
The Journal of Network and Computer Applications welcomes research contributions, surveys, and notes in all areas relating to computer networks and their applications. Sample topics include new design techniques; interesting or novel applications, components, or standards; computer networks and tools such as the WWW; emerging standards for Internet protocols; wireless networks; mobile computing; emerging computing models such as cloud computing and grid computing; and applications of networked systems for remote collaboration and telemedicine. The journal is abstracted and indexed in Scopus, Engineering Index, Web of Science, Science Citation Index Expanded, and INSPEC.