Trinocular visual odometry for divergent views with minimal overlap

Jaeheon Jeong, J. Mulligan, N. Correll
DOI: 10.1109/WORV.2013.6521943
Published in: 2013 IEEE Workshop on Robot Vision (WORV), 2013-05-30
Citations: 7

Abstract

We present a visual odometry algorithm for trinocular systems with divergent views and minimal overlap. Although bundle adjustment is the preferred method for multi-view visual odometry problems, it becomes infeasible when the number of features in the images, as in HD videos, is large. We propose a divide-and-conquer approach that reduces the trinocular visual odometry problem to five monocular visual odometry problems: one for each individual camera sequence, and two more using features matched temporally from consecutive images from the center camera to the left and right cameras, respectively. Unlike bundle adjustment, whose computational complexity is O(n³), the proposed approach matches features only between neighboring cameras and can therefore be executed in O(n²). Assuming constant motion of the cameras, temporal tracking compensates for the missing overlap between cameras, since objects from the center view eventually appear in the left or right camera. The scale factors that cannot be determined by monocular visual odometry are computed by constructing a system of equations based on the known relative camera poses and the five monocular VO estimates. The system is solved with a weighted least-squares scheme and remains over-determined even when the camera path follows a straight line. We evaluate the resulting system using synthetic and real video sequences recorded for a virtual exercise environment.
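The scale-recovery step described above can be illustrated with a minimal sketch. The paper's specific constraint construction from the relative camera poses is not reproduced here; instead we assume a generic over-determined linear system A s = b relating the five unknown monocular scale factors, with per-equation confidence weights, and solve it by weighted least squares via the normal equations (all matrices below are synthetic toy data):

```python
import numpy as np

# Toy stand-in for the stacked linear constraints A @ s = b that relate
# the five unknown monocular VO scale factors s through the known rigid
# transforms between the three cameras (the actual construction follows
# the paper; here A and b are synthetic).
rng = np.random.default_rng(0)
s_true = np.array([1.0, 1.2, 0.8, 1.1, 0.9])      # ground-truth scales (toy)
A = rng.standard_normal((12, 5))                   # over-determined: 12 eqs, 5 unknowns
b = A @ s_true + 0.01 * rng.standard_normal(12)    # constraints with small noise
w = np.full(12, 1.0)                               # per-equation confidence weights

# Weighted least squares: minimize || W^(1/2) (A s - b) ||^2,
# solved via the normal equations (A^T W A) s = A^T W b.
W = np.diag(w)
s_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

Because the system stays over-determined even for straight-line motion, the solve above remains well-posed in that degenerate-looking case; in practice the weights w would reflect the confidence of each monocular VO estimate rather than being uniform.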