Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment

Kyriaki Mouzakidou, Aurélien Brun, Davide A. Cucci, Jan Skaloud
DOI: 10.1016/j.ophoto.2023.100057
Journal: ISPRS Open Journal of Photogrammetry and Remote Sensing, Volume 12, Article 100057
Publication date: 2024-01-12
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2667393223000285/pdfft?md5=0f7ab041b690c142ba3b35d6019ecf11&pid=1-s2.0-S2667393223000285-main.pdf
Citations: 0

Abstract

Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network (i) with a commercial IMU for drones and (ii) with simultaneous processing of raw observations of several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (image 2D and point-cloud 3D tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare those with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-coherent geo-referenced point-cloud. We further demonstrate that the use of image constraints (in addition to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of special calibration patterns or calibration targets.
