Research on Inertial Navigation Technology of Unmanned Aerial Vehicles with Integrated Reinforcement Learning Algorithm

JCR quartile: Q4 (Engineering)
{"title":"基于集成强化学习算法的无人机惯性导航技术研究","authors":"","doi":"10.33140/jeee.02.03.06","DOIUrl":null,"url":null,"abstract":"With the continuous expansion of unmanned aerial vehicle (UAV) applications, traditional inertial navigation technology exhibits significant limitations in complex environments. In this study, we integrate improved reinforcement learning (RL) algorithms to enhance existing unmanned aerial vehicle inertial navigation technology and introduce a modulated mechanism (MM) for adjusting the state of the intelligent agent in an innovative manner [1,2]. Through interaction with the environment, the intelligent machine can learn more effective navigation strategies [3]. The ultimate goal is to provide a foundation for autonomous navigation of unmanned aerial vehicles during flight and improve navigation accuracy and robustness. We first define appropriate state representation and action space, and then design an adjustment mechanism based on the actions selected by the intelligent agent. The adjustment mechanism outputs the next state and reward value of the agent. Additionally, the adjustment mechanism calculates the error between the adjusted state and the unadjusted state. Furthermore, the intelligent agent stores the acquired experience samples containing states and reward values in a buffer and replays the experiences during each iteration to learn the dynamic characteristics of the environment. We name the improved algorithm as the DQM algorithm. Experimental results demonstrate that the intelligent agent using our proposed algorithm effectively reduces the accumulated errors of inertial navigation in dynamic environments. Although our research provides a basis for achieving autonomous navigation of unmanned aerial vehicles, there is still room for significant optimization. Further research can include testing unmanned aerial vehicles in simulated environments, testing unmanned aerial vehicles in realworld environments, optimizing the design of reward functions, improving the algorithm workflow to enhance convergence speed and performance, and enhancing the algorithm's generalization ability. It has been proven that by integrating reinforcement learning algorithms, unmanned aerial vehicles can achieve autonomous navigation, thereby improving navigation accuracy and robustness in dynamic and changing environments [4]. Therefore, this research plays an important role in promoting the development and application of unmanned aerial vehicle technology.","PeriodicalId":39047,"journal":{"name":"Journal of Electrical and Electronics Engineering","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Research on Inertial Navigation Technology of Unmanned Aerial Vehicles with Integrated Reinforcement Learning Algorithm\",\"authors\":\"\",\"doi\":\"10.33140/jeee.02.03.06\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the continuous expansion of unmanned aerial vehicle (UAV) applications, traditional inertial navigation technology exhibits significant limitations in complex environments. In this study, we integrate improved reinforcement learning (RL) algorithms to enhance existing unmanned aerial vehicle inertial navigation technology and introduce a modulated mechanism (MM) for adjusting the state of the intelligent agent in an innovative manner [1,2]. 
Through interaction with the environment, the intelligent machine can learn more effective navigation strategies [3]. The ultimate goal is to provide a foundation for autonomous navigation of unmanned aerial vehicles during flight and improve navigation accuracy and robustness. We first define appropriate state representation and action space, and then design an adjustment mechanism based on the actions selected by the intelligent agent. The adjustment mechanism outputs the next state and reward value of the agent. Additionally, the adjustment mechanism calculates the error between the adjusted state and the unadjusted state. Furthermore, the intelligent agent stores the acquired experience samples containing states and reward values in a buffer and replays the experiences during each iteration to learn the dynamic characteristics of the environment. We name the improved algorithm as the DQM algorithm. Experimental results demonstrate that the intelligent agent using our proposed algorithm effectively reduces the accumulated errors of inertial navigation in dynamic environments. Although our research provides a basis for achieving autonomous navigation of unmanned aerial vehicles, there is still room for significant optimization. Further research can include testing unmanned aerial vehicles in simulated environments, testing unmanned aerial vehicles in realworld environments, optimizing the design of reward functions, improving the algorithm workflow to enhance convergence speed and performance, and enhancing the algorithm's generalization ability. It has been proven that by integrating reinforcement learning algorithms, unmanned aerial vehicles can achieve autonomous navigation, thereby improving navigation accuracy and robustness in dynamic and changing environments [4]. Therefore, this research plays an important role in promoting the development and application of unmanned aerial vehicle technology.\",\"PeriodicalId\":39047,\"journal\":{\"name\":\"Journal of Electrical and Electronics Engineering\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Electrical and Electronics Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.33140/jeee.02.03.06\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Engineering\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Electrical and Electronics Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.33140/jeee.02.03.06","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

With the continuous expansion of unmanned aerial vehicle (UAV) applications, traditional inertial navigation technology exhibits significant limitations in complex environments. In this study, we integrate an improved reinforcement learning (RL) algorithm into existing UAV inertial navigation technology and introduce a modulated mechanism (MM) that adjusts the state of the intelligent agent in an innovative manner [1,2]. Through interaction with the environment, the agent can learn more effective navigation strategies [3]. The ultimate goal is to provide a foundation for autonomous UAV navigation during flight and to improve navigation accuracy and robustness. We first define an appropriate state representation and action space, and then design an adjustment mechanism based on the actions selected by the agent. The adjustment mechanism outputs the agent's next state and reward value, and also calculates the error between the adjusted state and the unadjusted state. The agent stores the acquired experience samples, containing states and reward values, in a buffer and replays these experiences at each iteration to learn the dynamic characteristics of the environment. We name the improved algorithm the DQM algorithm. Experimental results demonstrate that an agent using the proposed algorithm effectively reduces the accumulated errors of inertial navigation in dynamic environments. Although this research provides a basis for autonomous UAV navigation, significant room for optimization remains. Further work could include testing UAVs in simulated environments, testing UAVs in real-world environments, optimizing the design of the reward function, improving the algorithm workflow to increase convergence speed and performance, and enhancing the algorithm's generalization ability. The results indicate that, by integrating reinforcement learning algorithms, UAVs can achieve autonomous navigation, thereby improving navigation accuracy and robustness in dynamic, changing environments [4]. This research therefore plays an important role in promoting the development and application of UAV technology.
Source journal
Journal of Electrical and Electronics Engineering (Engineering: Electrical and Electronic Engineering)
CiteScore: 0.90
Self-citation rate: 0.00%
Articles published: 0
Review time: 16 weeks
Journal description: The Journal of Electrical and Electronics Engineering is an interdisciplinary, application-oriented scientific publication that offers researchers and PhD students the opportunity to disseminate novel and original research contributions in the field of electrical and electronics engineering. Articles are reviewed by professionals, and papers are selected solely on the quality of their content, according to the following criteria: the paper presents the authors' own research results; the content has not been submitted or published elsewhere; the paper is written in English; and the reference list should include papers presenting similar research results published in recent years in the Journal of Electrical and Electronics Engineering. The topics and instructions for authors can be found in the appropriate sections.