Energy optimization and age of information enhancement in multi-UAV networks using deep reinforcement learning

IF 0.7 | CAS Zone 4 (Engineering & Technology) | JCR Q4, ENGINEERING, ELECTRICAL & ELECTRONIC
Jeena Kim, Seunghyun Park, Hyunhee Park
{"title":"利用深度强化学习优化多无人机网络的能量和信息年龄增强","authors":"Jeena Kim,&nbsp;Seunghyun Park,&nbsp;Hyunhee Park","doi":"10.1049/ell2.70063","DOIUrl":null,"url":null,"abstract":"<p>This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicles (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption is proposed. By formulating the inter-UAV network path planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. The extensive simulation results, conducted in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, significantly reducing energy consumption up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructures are compromised. The use of replay memory approach, particularly the online history approach, proves crucial in adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.</p>","PeriodicalId":11556,"journal":{"name":"Electronics Letters","volume":"60 20","pages":""},"PeriodicalIF":0.7000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70063","citationCount":"0","resultStr":"{\"title\":\"Energy optimization and age of information enhancement in multi-UAV networks using deep reinforcement learning\",\"authors\":\"Jeena Kim,&nbsp;Seunghyun Park,&nbsp;Hyunhee Park\",\"doi\":\"10.1049/ell2.70063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicles (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption is proposed. By formulating the inter-UAV network path planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. The extensive simulation results, conducted in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, significantly reducing energy consumption up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructures are compromised. 
The use of replay memory approach, particularly the online history approach, proves crucial in adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.</p>\",\"PeriodicalId\":11556,\"journal\":{\"name\":\"Electronics Letters\",\"volume\":\"60 20\",\"pages\":\"\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70063\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronics Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ell2.70063\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronics Letters","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ell2.70063","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicle (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy is proposed that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption. By formulating the inter-UAV network path planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. Extensive simulation results, obtained in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, reducing energy consumption by up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructure is compromised. The use of a replay memory approach, particularly the online history approach, proves crucial for adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.
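
To make the DQN-with-replay-memory formulation in the abstract concrete, the sketch below shows a minimal single-UAV toy version in PyTorch. The grid environment (ToyUAVEnv), the state layout (position, battery, AoI), the reward weights, and the network sizes are all illustrative assumptions and are not taken from the letter; the authors' actual multi-UAV hierarchy, obstacle model, and "online history" replay variant are not reproduced here.

```python
# Minimal DQN-with-replay-buffer sketch (assumed toy setup, not the paper's model).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

class ToyUAVEnv:
    """Hypothetical single-UAV grid world: state = (x, y, battery, AoI)."""
    def __init__(self, size=5):
        self.size = size
        self.reset()

    def reset(self):
        self.pos = [0, 0]
        self.battery = 1.0      # normalized remaining energy
        self.aoi = 0.0          # age of information at the sink, in steps
        return self._state()

    def _state(self):
        return torch.tensor([self.pos[0] / self.size, self.pos[1] / self.size,
                             self.battery, self.aoi / 20.0], dtype=torch.float32)

    def step(self, action):
        # Actions 0-3 move N/S/E/W (costs more energy); action 4 hovers and
        # transmits, which resets the AoI of the collected data.
        moves = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
        dx, dy = moves[action]
        self.pos[0] = min(max(self.pos[0] + dx, 0), self.size - 1)
        self.pos[1] = min(max(self.pos[1] + dy, 0), self.size - 1)
        energy_cost = 0.05 if action < 4 else 0.01
        self.battery -= energy_cost
        self.aoi = 0.0 if action == 4 else self.aoi + 1.0
        # Reward trades off data freshness against energy use (weights are assumptions).
        reward = -0.1 * self.aoi - 1.0 * energy_cost
        done = self.battery <= 0.0
        return self._state(), reward, done

def build_qnet(n_obs=4, n_actions=5):
    return nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

env, qnet = ToyUAVEnv(), build_qnet()
optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)        # replay memory buffer
gamma, epsilon, batch_size = 0.99, 0.1, 32

for episode in range(50):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy action selection over the Q-values.
        if random.random() < epsilon:
            action = random.randrange(5)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        if len(replay) >= batch_size:
            # Learn from a random minibatch of stored transitions.
            batch = random.sample(replay, batch_size)
            s, a, r, s2, d = zip(*batch)
            s, s2 = torch.stack(s), torch.stack(s2)
            a = torch.tensor(a).unsqueeze(1)
            r = torch.tensor(r, dtype=torch.float32)
            d = torch.tensor(d, dtype=torch.float32)
            q = qnet(s).gather(1, a).squeeze(1)
            with torch.no_grad():
                target = r + gamma * (1 - d) * qnet(s2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The sketch omits a target network and any multi-UAV coordination; it only illustrates how a replay buffer decouples experience collection from learning and how an AoI term can enter the reward alongside an energy-cost term.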

Source journal

Electronics Letters (Engineering: Electrical & Electronic)
CiteScore: 2.70
Self-citation rate: 0.00%
Articles per year: 268
Average review time: 3.6 months
Journal description: Electronics Letters is an internationally renowned peer-reviewed rapid-communication journal that publishes short original research papers every two weeks. Its broad and interdisciplinary scope covers the latest developments in all electronic engineering related fields, including communication, biomedical, optical and device technologies. Electronics Letters also provides further insight into some of the latest developments through special features and interviews.

Scope: As a journal at the forefront of its field, Electronics Letters publishes papers covering all themes of electronic and electrical engineering, including: Antennas and Propagation; Biomedical and Bioinspired Technologies, Signal Processing and Applications; Control Engineering; Electromagnetism: Theory, Materials and Devices; Electronic Circuits and Systems; Image, Video and Vision Processing and Applications; Information, Computing and Communications; Instrumentation and Measurement; Microwave Technology; Optical Communications; Photonics and Opto-Electronics; Power Electronics, Energy and Sustainability; Radar, Sonar and Navigation; Semiconductor Technology; Signal Processing; MIMO.