Multiple Frame Integration for Essential Matrix-based Visual Odometry

H. Nguyen, The-Tien Nguyen, Xuan-Phuc Nguyen, Cong Tran, Q. Nguyen
Published in: 2022 16th International Conference on Ubiquitous Information Management and Communication (IMCOM)
Publication date: 2022-01-03
DOI: 10.1109/imcom53663.2022.9721757 (https://doi.org/10.1109/imcom53663.2022.9721757)
Citations: 0

Abstract

Visual odometry (VO) is an essential part of visual SLAM and serves as the driving engine of various autonomous navigation systems. Traditional visual odometry recovers the camera motion from a pair of consecutive images, an approach known as frame-to-frame. This paper introduces a multiple-frame integration method for stereo visual odometry that aims to reduce drift by consecutively refining the transformation and the feature locations. First, the rotation is estimated from frame-to-frame VO based on an essential matrix and then refined using a loop-closure constraint over three consecutive camera frames. Second, 2D feature locations are gradually updated from their corresponding points in the previous frame through epipolar constraints. An experimental comparison on a publicly available benchmark, the KITTI dataset, confirms that the proposed approach improves both rotation and translation accuracy over traditional approaches by around 20% under the same experimental conditions.
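Only the abstract is available on this page, so as background for the essential-matrix baseline it refines, the sketch below implements the classical frame-to-frame step in plain NumPy: the 8-point algorithm for estimating E from normalized correspondences, followed by the standard four-candidate decomposition of E into rotation and translation with a cheirality (positive-depth) check. This is a generic illustration under synthetic noiseless data, not the authors' method; all function names are invented for the example.

```python
import numpy as np

def essential_from_points(x1, x2):
    """8-point algorithm: estimate E from >= 8 normalized correspondences.

    x1, x2 are (N, 2) arrays of normalized image coordinates (intrinsics
    already removed), satisfying the epipolar constraint x2_h^T E x1_h = 0.
    """
    n = x1.shape[0]
    # Each correspondence gives one linear equation in the 9 entries of E.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(n),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: two equal singular values, one zero.
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def _in_front(R, t, a, b):
    """Triangulate one pair linearly; True if depth is positive in both views."""
    a_h, b_h = np.append(a, 1.0), np.append(b, 1.0)
    # z2 * b_h = z1 * (R a_h) + t  ->  solve for the two depths z1, z2.
    z1, z2 = np.linalg.lstsq(np.column_stack([R @ a_h, -b_h]), -t, rcond=None)[0]
    return z1 > 0 and z2 > 0

def decompose_essential(E, x1, x2):
    """Pick the (R, t) candidate that places the points in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    best = None
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            n_front = sum(_in_front(R, t, a, b) for a, b in zip(x1, x2))
            if best is None or n_front > best[0]:
                best = (n_front, R, t)
    return best[1], best[2]

# Synthetic frame pair: a 5-degree yaw plus a mostly sideways translation.
rng = np.random.default_rng(0)
X1 = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(40, 3))
ang = np.deg2rad(5.0)
R_true = np.array([[np.cos(ang), 0.0, np.sin(ang)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(ang), 0.0, np.cos(ang)]])
t_true = np.array([0.5, 0.0, 0.1])
X2 = X1 @ R_true.T + t_true                     # points in the second camera frame
x1, x2 = X1[:, :2] / X1[:, 2:], X2[:, :2] / X2[:, 2:]

E = essential_from_points(x1, x2)
R, t = decompose_essential(E, x1, x2)
rot_err = np.rad2deg(np.arccos(np.clip((np.trace(R @ R_true.T) - 1) / 2, -1, 1)))
print(f"rotation error: {rot_err:.2e} deg")     # near zero on noiseless data
```

Note that monocular decomposition recovers the translation only up to scale (here t is a unit vector); the paper's stereo setting and its three-frame loop-closure constraint are precisely what address the drift this frame-to-frame step accumulates.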