Title: Performance enhancement of UAV-enabled MEC systems through intelligent task offloading and resource allocation
Authors: Mohsen Darchini-Tabrizi, Amirali Pakdaman-Donyavi, Reza Entezari-Maleki, Leonel Sousa
DOI: 10.1016/j.comnet.2025.111280
Journal: Computer Networks, vol. 264, Article 111280 (JCR Q1, Computer Science, Hardware & Architecture; Impact Factor 4.4)
Publication date: 2025-04-12
URL: https://www.sciencedirect.com/science/article/pii/S1389128625002488
Cited by: 0
Abstract
The rapid and continuous growth of the Internet of Things has led to a rising demand for processing and storage solutions that can overcome the limitations of user devices. While cloud computing provides scalability, it often introduces considerable delay. To address this challenge, a computing paradigm that brings resources closer to user devices is preferable. Unmanned Aerial Vehicles (UAVs) have emerged as an effective means of enhancing communication quality and coverage in wireless systems, particularly where ground infrastructure is limited. This paper presents a UAV-enabled Mobile Edge Computing (MEC) system, where UAVs equipped with computational capabilities provide task offloading services to users. In this system, users process part of their computing tasks locally, while offloading the remaining tasks to UAVs for processing. The primary goal of the proposed algorithm is to minimize processing delays, taking into account the environmental and energy constraints of the UAVs, such as movement boundaries, link blockages, transmission delays, and battery consumption for computation and flight. To achieve this, we propose a Deep Reinforcement Learning algorithm based on the Rainbow Deep Q-Network. This algorithm explores the dynamic and stochastic environment of MEC to deploy an appropriate computation offloading policy. The extensive experiments conducted in this study demonstrate the superior performance of the proposed method. The results show fast convergence to an optimal value and an average improvement of 12.63 percent in delay compared to state-of-the-art methods.
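The partial-offloading trade-off described in the abstract can be illustrated with a minimal sketch. The model below is a simplification assumed for illustration only (it is not the paper's formulation and ignores UAV mobility, link blockage, and energy): a fraction of a task runs locally while the rest is transmitted to a UAV and processed there, and the two parts complete in parallel, so the task finishes when the slower part does. All parameter values are hypothetical.

```python
# Illustrative partial-offloading delay model (assumed, not from the paper).
# A user offloads a fraction `split` of a task to a UAV; the remainder is
# processed locally. Local and offloaded parts proceed in parallel.

def total_delay(task_bits, split, f_local, f_uav, rate):
    """Task completion delay for a given offloading fraction.

    task_bits : task size in bits
    split     : fraction of the task offloaded to the UAV (0..1)
    f_local   : effective local processing speed in bits/s
    f_uav     : effective UAV processing speed in bits/s
    rate      : uplink transmission rate in bits/s
    """
    local_delay = (1 - split) * task_bits / f_local
    offload_delay = split * task_bits / rate + split * task_bits / f_uav
    # The task is done when the slower of the two parallel parts finishes.
    return max(local_delay, offload_delay)

def best_split(task_bits, f_local, f_uav, rate, steps=1000):
    """Brute-force the offloading fraction that minimizes total delay."""
    return min(
        (total_delay(task_bits, s / steps, f_local, f_uav, rate), s / steps)
        for s in range(steps + 1)
    )

if __name__ == "__main__":
    # Hypothetical numbers: 1 Mbit task, UAV 10x faster than the device,
    # 5 Mbit/s uplink.
    delay, split = best_split(1e6, f_local=1e6, f_uav=1e7, rate=5e6)
    print(f"offload {split:.2f} of the task, finish in {delay:.3f} s")
```

Under these assumed numbers, offloading most of the task is optimal because the UAV's speed advantage outweighs the transmission cost; the paper's DRL agent learns such decisions online in a stochastic environment rather than by exhaustive search.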
About the journal
Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.