Proximal Policy Optimization-based Task Offloading Framework for Smart Disaster Monitoring using UAV-assisted WSNs

IF 1.6 | Q2 | Multidisciplinary Sciences
MethodsX | Pub Date: 2025-06-26 | DOI: 10.1016/j.mex.2025.103472
C.N. Vanitha, P. Anusuya, Rajesh Kumar Dhanaraj, Dragan Pamucar, Mahmoud Ahmad Al-Khasawneh
{"title":"Proximal Policy Optimization-based Task Offloading Framework for Smart Disaster Monitoring using UAV-assisted WSNs","authors":"C.N. Vanitha ,&nbsp;P. Anusuya ,&nbsp;Rajesh Kumar Dhanaraj ,&nbsp;Dragan Pamucar ,&nbsp;Mahmoud Ahmad Al-Khasawneh","doi":"10.1016/j.mex.2025.103472","DOIUrl":null,"url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs) are increasingly employed in Wireless Sensor Networks (WSNs) to enhance communication, coverage, and energy efficiency, particularly in disaster monitoring and remote surveillance scenarios. However, challenges such as limited energy resources, dynamic task allocation, and UAV trajectory optimization remain critical. This paper presents Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs (ETORL-UAV), a novel framework that integrates Proximal Policy Optimization (PPO) based reinforcement learning to intelligently manage UAV-assisted operations in edge-enabled WSNs. The proposed approach utilizes a multi-objective reward model to adaptively balance energy consumption, task success rate, and network lifetime. Extensive simulation results demonstrate that ETORL-UAV outperforms five state-of-the-art methods Meta-RL, g-MAPPO, Backscatter Optimization, Hierarchical Optimization, and Game Theory based Pricing achieving up to 9.3 % higher task offloading success, 18.75 % improvement in network lifetime, and 27 % reduction in energy consumption. These results validate the framework's scalability, reliability, and practical applicability for real-world disaster-response WSN deployments.<ul><li><span>•</span><span><div>Proposes ETORL-UAV: Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs</div></span></li><li><span>•</span><span><div>Leverages PPO-based reinforcement learning and a multi-objective reward model</div></span></li><li><span>•</span><span><div>Demonstrates superior performance over five benchmark approaches in disaster-response simulations</div></span></li></ul></div></div>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"Article 103472"},"PeriodicalIF":1.6000,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"MethodsX","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2215016125003176","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Unmanned Aerial Vehicles (UAVs) are increasingly employed in Wireless Sensor Networks (WSNs) to enhance communication, coverage, and energy efficiency, particularly in disaster monitoring and remote surveillance scenarios. However, challenges such as limited energy resources, dynamic task allocation, and UAV trajectory optimization remain critical. This paper presents Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs (ETORL-UAV), a novel framework that integrates Proximal Policy Optimization (PPO)-based reinforcement learning to intelligently manage UAV-assisted operations in edge-enabled WSNs. The proposed approach uses a multi-objective reward model to adaptively balance energy consumption, task success rate, and network lifetime. Extensive simulation results demonstrate that ETORL-UAV outperforms five state-of-the-art methods (Meta-RL, g-MAPPO, Backscatter Optimization, Hierarchical Optimization, and Game Theory-based Pricing), achieving up to 9.3% higher task-offloading success, an 18.75% improvement in network lifetime, and a 27% reduction in energy consumption. These results validate the framework's scalability, reliability, and practical applicability for real-world disaster-response WSN deployments.
  • Proposes ETORL-UAV: Energy-efficient Task Offloading using Reinforcement Learning for UAV-assisted WSNs
  • Leverages PPO-based reinforcement learning and a multi-objective reward model (sketched below)
  • Demonstrates superior performance over five benchmark approaches in disaster-response simulations
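The abstract names the two core ingredients (a PPO policy and a multi-objective reward over energy consumption, task success rate, and network lifetime) but not their exact formulation. Below is a minimal sketch assuming a simple weighted-sum reward and the standard PPO clipped surrogate objective; the weights (w_energy, w_success, w_lifetime), the normalization, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# --- Hedged sketch only: weights, normalization, and function names are
# --- illustrative assumptions, not the authors' actual reward model.

def offloading_reward(energy_used, task_succeeded, residual_lifetime,
                      w_energy=0.4, w_success=0.4, w_lifetime=0.2):
    """Multi-objective reward: weighted sum of per-step energy cost,
    task-offloading success, and normalized remaining network lifetime."""
    r_energy = -energy_used                      # penalize energy spent this step
    r_success = 1.0 if task_succeeded else -1.0  # reward completed offloads
    r_lifetime = residual_lifetime               # assumed normalized to [0, 1]
    return (w_energy * r_energy
            + w_success * r_success
            + w_lifetime * r_lifetime)

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate loss (Schulman et al., 2017):
    L = -E[min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t)],
    where r_t = pi_new(a|s) / pi_old(a|s) and A_t is the advantage."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()

# Tiny demo with made-up numbers.
print(offloading_reward(energy_used=0.3, task_succeeded=True,
                        residual_lifetime=0.8))   # 0.44
ratios = np.array([0.9, 1.1, 1.4])
advs = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(ratios, advs))                # about -0.917
```

The clipping term bounds how far a single update can move the policy away from the one that collected the data, which is the stability property that makes PPO a plausible fit for the non-stationary conditions of disaster-monitoring deployments.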


Source Journal
MethodsX (Health Professions: Medical Laboratory Technology)
CiteScore: 3.60
Self-citation rate: 5.30%
Articles published: 314
Review time: 7 weeks