BooM-Vio: Bootstrapped Monocular Visual-Inertial Odometry with Absolute Trajectory Estimation through Unsupervised Deep Learning

Kyle Lindgren, Sarah Leung, W. Nothwang, E. J. Shamwell
Published in: 2019 19th International Conference on Advanced Robotics (ICAR), pp. 516-522, December 2019.
DOI: 10.1109/ICAR46387.2019.8981570
Cited by: 1

Abstract

Machine learning has emerged as an extraordinary tool for solving many computer vision tasks by extracting and correlating meaningful features from high-dimensional inputs in ways that often exceed the best human-derived modeling efforts. However, the area of vision-aided localization remains diverse, with many traditional, model-based approaches (i.e., filtering- or nonlinear-least-squares-based) often outperforming deep, model-free approaches. In this work, we present Bootstrapped Monocular VIO (BooM), a scaled monocular visual-inertial odometry (VIO) solution that combines the complex data-association ability of model-free approaches with the ability of model-based approaches to exploit known geometric dynamics. Our end-to-end, unsupervised deep neural network simultaneously learns to perform visual-inertial odometry and estimate scene depth, while scale is enforced through a loss signal computed from position-change magnitude estimates produced by traditional methods. We evaluate our network against a state-of-the-art (SoA) approach on the KITTI driving dataset as well as a micro aerial vehicle (MAV) dataset that we collected in the AirSim simulation environment. We further demonstrate the benefits of our combined approach through robustness tests on degraded trajectories.
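The central idea of the abstract — enforcing metric scale on an otherwise scale-ambiguous monocular network by penalizing disagreement between the predicted translation magnitude and a position-change magnitude bootstrapped from a traditional estimator — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual loss formulation; the function name and the squared-error form are assumptions.

```python
import numpy as np

def scale_loss(pred_translation, bootstrap_magnitude):
    """Hypothetical scale-enforcing loss term.

    pred_translation: the network's predicted inter-frame translation
        vector (scale-ambiguous in pure monocular training).
    bootstrap_magnitude: magnitude of the position change over the same
        interval, estimated by a traditional model-based method.

    Returns the squared error between the norm of the predicted
    translation and the bootstrapped magnitude, which pushes the
    network's outputs toward metric scale.
    """
    pred_magnitude = np.linalg.norm(pred_translation)
    return (pred_magnitude - bootstrap_magnitude) ** 2
```

In practice such a term would be added, with a weighting factor, to the photometric and depth-consistency losses of the unsupervised VIO objective; only the magnitude is supervised, so the direction of motion is still learned from the visual-inertial data.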