Expediting the Convergence of Global Localization of UAVs through Forward-Facing Camera Observation

Drones · Pub Date: 2024-07-19 · DOI: 10.3390/drones8070335
Zhenyu Li, Xiangyuan Jiang, Sile Ma, Xiaojing Ma, Zhenyi Lv, Hongliang Ding, Haiyan Ji, Zheng Sun
{"title":"Expediting the Convergence of Global Localization of UAVs through Forward-Facing Camera Observation","authors":"Zhenyu Li, Xiangyuan Jiang, Sile Ma, Xiaojing Ma, Zhenyi Lv, Hongliang Ding, Haiyan Ji, Zheng Sun","doi":"10.3390/drones8070335","DOIUrl":null,"url":null,"abstract":"In scenarios where the global navigation satellite system is unavailable, unmanned aerial vehicles (UAVs) can employ visual algorithms to process aerial images. These images are integrated with satellite maps and digital elevation models (DEMs) to achieve global localization. To address the localization challenge in unfamiliar areas devoid of prior data, an iterative computation-based localization framework is commonly used. This framework iteratively refines its calculations using multiple observations from a downward-facing camera to determine an accurate global location. To improve the rate of convergence for localization, we introduced an innovative observation model. We derived a terrain descriptor from the images captured by a forward-facing camera and integrated it as supplementary observation into a point-mass filter (PMF) framework to enhance the confidence of the observation likelihood distribution. Furthermore, within this framework, the methods for the truncation of the convolution kernel and that of the probability distribution were developed, thereby enhancing the computational efficiency and convergence rate, respectively. The performance of the algorithm was evaluated using real UAV flight sequences, a satellite map, and a DEM in an area measuring 7.7 km × 8 km. The results demonstrate that this method significantly accelerates the localization convergence during both takeoff and ascent phases as well as during cruise flight. Additionally, it increases localization accuracy and robustness in complex environments, such as areas with uneven terrain and ambiguous scenes. The method is applicable to the localization of UAVs in large-scale unknown scenarios, thereby enhancing the flight safety and mission execution capabilities of UAVs.","PeriodicalId":507567,"journal":{"name":"Drones","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Drones","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/drones8070335","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In scenarios where the global navigation satellite system (GNSS) is unavailable, unmanned aerial vehicles (UAVs) can employ visual algorithms to process aerial images and match them against satellite maps and digital elevation models (DEMs) to achieve global localization. To address localization in unfamiliar areas without prior data, an iterative computation-based localization framework is commonly used, which refines its estimate over multiple observations from a downward-facing camera to determine an accurate global position. To improve the convergence rate of localization, we introduce a new observation model: a terrain descriptor derived from images captured by a forward-facing camera is integrated as a supplementary observation into a point-mass filter (PMF) framework to raise the confidence of the observation likelihood distribution. Within this framework, truncation methods for the convolution kernel and for the probability distribution were also developed, improving computational efficiency and convergence rate, respectively. The performance of the algorithm was evaluated using real UAV flight sequences, a satellite map, and a DEM covering an area of 7.7 km × 8 km. The results demonstrate that the method significantly accelerates localization convergence during the takeoff and ascent phases as well as during cruise flight. It also improves localization accuracy and robustness in complex environments, such as areas with uneven terrain and ambiguous scenes. The method is applicable to UAV localization in large-scale unknown scenarios, thereby enhancing flight safety and mission execution capability.
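As a rough illustration of the grid-based filtering scheme summarized in the abstract, the sketch below shows a single point-mass filter predict/update cycle in which a primary likelihood (downward-facing camera vs. satellite map) is fused with a supplementary terrain-descriptor likelihood (forward-facing camera vs. DEM), and both the motion kernel and the posterior grid are truncated. The function name `pmf_step`, the likelihood grids, and the truncation thresholds are hypothetical, introduced only for illustration; this is not the authors' implementation.

```python
# Minimal sketch of one predict/update cycle of a grid-based point-mass filter (PMF),
# assuming hypothetical 2-D likelihood grids of the same shape as the belief grid:
# `image_lik` (downward-facing camera vs. satellite map) and
# `terrain_lik` (forward-facing terrain descriptor vs. DEM).
import numpy as np
from scipy.signal import fftconvolve

def pmf_step(belief, motion_kernel, image_lik, terrain_lik,
             kernel_trunc=1e-3, prob_trunc=1e-6):
    # Prediction: truncate near-zero kernel weights (efficiency idea from the
    # abstract), then convolve the belief grid with the motion-noise kernel.
    k = np.where(motion_kernel < kernel_trunc * motion_kernel.max(),
                 0.0, motion_kernel)
    k /= k.sum()
    predicted = np.clip(fftconvolve(belief, k, mode="same"), 0.0, None)

    # Update: fuse the primary (downward-camera) likelihood with the
    # supplementary terrain-descriptor likelihood from the forward-facing camera.
    posterior = predicted * image_lik * terrain_lik

    # Truncate negligible probability mass to keep the distribution compact
    # (convergence aid), then renormalize.
    posterior[posterior < prob_trunc * posterior.max()] = 0.0
    return posterior / posterior.sum()
```

In practice, the belief grid would span the search area (here 7.7 km × 8 km), and the two likelihood grids would be recomputed at each step by scoring the current camera observations against the satellite map and DEM at every grid cell.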