Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation

J. Su, Xutao Li, Yunming Ye, Yan Li
{"title":"Monocular visual-inertial state estimation on 3D large-scale scenes for UAVs navigation","authors":"J. Su, Xutao Li, Yunming Ye, Yan Li","doi":"10.1109/SSRR.2017.8088162","DOIUrl":null,"url":null,"abstract":"Direct method for visual odometry has gained popularity, it needs not to compute feature descriptor and uses the actual values of camera sensors directly. Hence, it is very fast. However, its accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurement, in which uses IMU preintegration to provide prior state for semi-direct method tracking and uses precise state estimation of visual odometry to optimizate IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter such that the pixels in environments with little or high- frequency texture can be efficiently tracked. Our approach is able to obtain the gravity orientation in initial IMU body frame and the scale information by using the monocular camera and IMU. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimation is much faster and achieves better accuracy on benchmark datasets.","PeriodicalId":403881,"journal":{"name":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSRR.2017.8088162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Direct methods for visual odometry have gained popularity: they need not compute feature descriptors and instead operate directly on the raw camera intensity values, which makes them very fast. However, their accuracy and consistency are not satisfactory. Based on these considerations, we propose a tightly-coupled, optimization-based method to fuse inertial measurement unit (IMU) and visual measurements, which uses IMU preintegration to provide a prior state for semi-direct tracking and uses the precise state estimates of the visual odometry to optimize the IMU state estimation. Furthermore, we incorporate Kanade-Lucas-Tomasi tracking and a probabilistic depth filter so that pixels in environments with little or high-frequency texture can be tracked efficiently. Our approach obtains the gravity orientation in the initial IMU body frame and the metric scale using only the monocular camera and the IMU. More importantly, we do not need any prior landmark points. Our monocular visual-inertial state estimator is much faster and achieves better accuracy on benchmark datasets.
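The full preintegration and optimization formulation is in the paper itself; as a rough illustration of what IMU preintegration computes, the following minimal sketch (our own illustration, not the authors' code; it assumes numpy and ignores noise propagation and bias Jacobians) accumulates the relative rotation, velocity, and position deltas between two keyframes from raw IMU samples. Because these deltas depend only on the measurements and the current bias estimates, they can be reused as a motion prior for the semi-direct tracker without re-integration whenever the keyframe states are re-linearized.

import numpy as np

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues' formula: rotation vector -> rotation matrix.
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt, bg, ba):
    # Illustrative sketch: accumulate relative rotation dR, velocity dv, and
    # position dp between two keyframes from (N, 3) gyroscope [rad/s] and
    # accelerometer [m/s^2] samples at period dt, given bias estimates bg, ba.
    # The result is independent of the absolute pose, velocity, and gravity.
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        a_body = a - ba                       # bias-corrected specific force
        dp += dv * dt + 0.5 * (dR @ a_body) * dt**2
        dv += (dR @ a_body) * dt
        dR = dR @ exp_so3((w - bg) * dt)      # integrate rotation on SO(3)
    return dR, dv, dp

Given the state at keyframe i, the standard prediction for keyframe j after a span Δt is R_j = R_i·dR, v_j = v_i + g·Δt + R_i·dv, and p_j = p_i + v_i·Δt + ½·g·Δt² + R_i·dp (g being the gravity vector), which is how the preintegrated terms supply the prior state for tracking mentioned above.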