{"title":"利用深度强化学习优化多无人机网络的能量和信息年龄增强","authors":"Jeena Kim, Seunghyun Park, Hyunhee Park","doi":"10.1049/ell2.70063","DOIUrl":null,"url":null,"abstract":"<p>This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicles (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption is proposed. By formulating the inter-UAV network path planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. The extensive simulation results, conducted in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, significantly reducing energy consumption up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructures are compromised. The use of replay memory approach, particularly the online history approach, proves crucial in adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.</p>","PeriodicalId":11556,"journal":{"name":"Electronics Letters","volume":"60 20","pages":""},"PeriodicalIF":0.7000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70063","citationCount":"0","resultStr":"{\"title\":\"Energy optimization and age of information enhancement in multi-UAV networks using deep reinforcement learning\",\"authors\":\"Jeena Kim, Seunghyun Park, Hyunhee Park\",\"doi\":\"10.1049/ell2.70063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicles (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption is proposed. By formulating the inter-UAV network path planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. The extensive simulation results, conducted in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, significantly reducing energy consumption up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructures are compromised. 
The use of replay memory approach, particularly the online history approach, proves crucial in adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.</p>\",\"PeriodicalId\":11556,\"journal\":{\"name\":\"Electronics Letters\",\"volume\":\"60 20\",\"pages\":\"\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2024-10-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70063\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Electronics Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ell2.70063\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electronics Letters","FirstCategoryId":"5","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ell2.70063","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Energy optimization and age of information enhancement in multi-UAV networks using deep reinforcement learning
This letter introduces an innovative approach for minimizing energy consumption in multi-unmanned aerial vehicle (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy is proposed that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption. By formulating the inter-UAV network path-planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. Extensive simulations, conducted in both rural and urban scenarios, demonstrate the effectiveness of employing a memory access approach within the DQN framework, reducing energy consumption by up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructure is compromised. The use of a replay memory approach, particularly the online history approach, proves crucial for adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.
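The letter's method is not detailed beyond the abstract, but the loop it describes (a DQN acting on an MDP formulation of UAV path planning, trained from a replay memory) follows a standard pattern. Below is a minimal PyTorch-style sketch of that pattern; the state/action encoding, network sizes, reward weights, and all identifiers are illustrative assumptions, not details from the paper. AoI here follows its usual definition: the age of the freshest delivered update, Δ(t) = t − u(t), where u(t) is the generation time of the most recently received packet.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Illustrative sizes only: state = (x, y, battery, current AoI),
# actions = four compass moves + hover. Not taken from the letter.
STATE_DIM, N_ACTIONS = 4, 5
GAMMA, BATCH_SIZE, MEMORY_SIZE = 0.99, 64, 10_000
W_AOI, W_ENERGY = 1.0, 0.5          # placeholder reward weights

class QNetwork(nn.Module):
    """Small MLP approximating Q(s, a) for one UAV."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

policy_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
memory = deque(maxlen=MEMORY_SIZE)  # replay memory of (s, a, r, s', done)

def reward(aoi, energy_used):
    """Trade data freshness against energy: both terms are penalties."""
    return -(W_AOI * aoi + W_ENERGY * energy_used)

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the UAV's move set."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy_net(state.unsqueeze(0)).argmax(dim=1).item()

def train_step():
    """One DQN update on a uniform sample from replay memory."""
    if len(memory) < BATCH_SIZE:
        return
    batch = random.sample(memory, BATCH_SIZE)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])
    dones = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    q = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                      # frozen bootstrap target
        q_next = target_net(next_states).max(dim=1).values
    target = rewards + GAMMA * (1.0 - dones) * q_next

    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The replay memory is what the abstract calls the "memory access approach": sampling past transitions decorrelates updates and lets the agent keep adapting as conditions change. An "online history" variant would bias sampling toward recent transitions, though the exact scheme used in the letter is not described here.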
About the journal
Electronics Letters is an internationally renowned peer-reviewed rapid-communication journal that publishes short original research papers every two weeks. Its broad and interdisciplinary scope covers the latest developments in all electronic engineering related fields including communication, biomedical, optical and device technologies. Electronics Letters also provides further insight into some of the latest developments through special features and interviews.
Scope
As a journal at the forefront of its field, Electronics Letters publishes papers covering all themes of electronic and electrical engineering. The major themes of the journal are listed below.
Antennas and Propagation
Biomedical and Bioinspired Technologies, Signal Processing and Applications
Control Engineering
Electromagnetism: Theory, Materials and Devices
Electronic Circuits and Systems
Image, Video and Vision Processing and Applications
Information, Computing and Communications
Instrumentation and Measurement
Microwave Technology
Optical Communications
Photonics and Opto-Electronics
Power Electronics, Energy and Sustainability
Radar, Sonar and Navigation
Semiconductor Technology
Signal Processing