Proximal Policy Optimization-based Task Offloading Framework for Smart Disaster Monitoring using UAV-assisted WSNs
C.N. Vanitha, P. Anusuya, Rajesh Kumar Dhanaraj, Dragan Pamucar, Mahmoud Ahmad Al-Khasawneh
MethodsX, Volume 15, Article 103472 (2025). DOI: 10.1016/j.mex.2025.103472
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly employed in Wireless Sensor Networks (WSNs) to enhance communication, coverage, and energy efficiency, particularly in disaster monitoring and remote surveillance scenarios. However, challenges such as limited energy resources, dynamic task allocation, and UAV trajectory optimization remain critical. This paper presents Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs (ETORL-UAV), a novel framework that integrates Proximal Policy Optimization (PPO)-based reinforcement learning to intelligently manage UAV-assisted operations in edge-enabled WSNs. The proposed approach uses a multi-objective reward model to adaptively balance energy consumption, task success rate, and network lifetime. Extensive simulation results demonstrate that ETORL-UAV outperforms five state-of-the-art methods (Meta-RL, g-MAPPO, Backscatter Optimization, Hierarchical Optimization, and Game-Theory-based Pricing), achieving up to 9.3% higher task offloading success, an 18.75% improvement in network lifetime, and a 27% reduction in energy consumption. These results validate the framework's scalability, reliability, and practical applicability for real-world disaster-response WSN deployments.
• Proposes ETORL-UAV: Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs.
• Leverages PPO-based reinforcement learning and a multi-objective reward model.
• Demonstrates superior performance over five benchmark approaches in disaster-response simulations.
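To make the multi-objective reward idea concrete, the sketch below shows one plausible way to scalarize the three objectives the abstract names (energy consumption, task success rate, network lifetime) into a single PPO reward signal. The weights, field names, and energy budget are illustrative assumptions for exposition only; they are not taken from the paper.

```python
# Minimal sketch of a weighted multi-objective reward for UAV task offloading.
# All weights and field names are hypothetical; the paper's actual reward model
# may differ in form and normalization.
from dataclasses import dataclass


@dataclass
class StepOutcome:
    energy_used_j: float          # energy spent this step (joules)
    tasks_offloaded: int          # tasks successfully offloaded this step
    tasks_attempted: int          # tasks attempted this step
    min_node_energy_frac: float   # lowest residual energy fraction across nodes
                                  # (a simple proxy for network lifetime)


def multi_objective_reward(o: StepOutcome,
                           w_energy: float = 0.4,
                           w_success: float = 0.4,
                           w_lifetime: float = 0.2,
                           energy_budget_j: float = 5.0) -> float:
    """Combine the three objectives into one scalar reward for a PPO agent."""
    success_rate = o.tasks_offloaded / o.tasks_attempted if o.tasks_attempted else 0.0
    energy_penalty = min(o.energy_used_j / energy_budget_j, 1.0)  # clip to [0, 1]
    # Reward successful offloading and preserved node energy; penalize energy use.
    return (w_success * success_rate
            + w_lifetime * o.min_node_energy_frac
            - w_energy * energy_penalty)


if __name__ == "__main__":
    outcome = StepOutcome(energy_used_j=1.2, tasks_offloaded=8,
                          tasks_attempted=10, min_node_energy_frac=0.65)
    print(f"reward = {multi_objective_reward(outcome):.3f}")
```

In a PPO training loop, this scalar would be returned by the environment at each step, so adjusting the weights shifts the learned policy toward energy savings, offloading success, or lifetime preservation, which is the balance the framework is described as tuning adaptively.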