ORB-SfMLearner: ORB-Guided Self-supervised Visual Odometry with Selective Online Adaptation

Yanlin Jin, Rui-Yang Ju, Haojun Liu, Yuzhong Zhong
{"title":"ORB-SfMLearner: ORB-Guided Self-supervised Visual Odometry with Selective Online Adaptation","authors":"Yanlin Jin, Rui-Yang Ju, Haojun Liu, Yuzhong Zhong","doi":"arxiv-2409.11692","DOIUrl":null,"url":null,"abstract":"Deep visual odometry, despite extensive research, still faces limitations in\naccuracy and generalizability that prevent its broader application. To address\nthese challenges, we propose an Oriented FAST and Rotated BRIEF (ORB)-guided\nvisual odometry with selective online adaptation named ORB-SfMLearner. We\npresent a novel use of ORB features for learning-based ego-motion estimation,\nleading to more robust and accurate results. We also introduce the\ncross-attention mechanism to enhance the explainability of PoseNet and have\nrevealed that driving direction of the vehicle can be explained through\nattention weights, marking a novel exploration in this area. To improve\ngeneralizability, our selective online adaptation allows the network to rapidly\nand selectively adjust to the optimal parameters across different domains.\nExperimental results on KITTI and vKITTI datasets show that our method\noutperforms previous state-of-the-art deep visual odometry methods in terms of\nego-motion accuracy and generalizability.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11692","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Deep visual odometry, despite extensive research, still faces limitations in accuracy and generalizability that prevent its broader application. To address these challenges, we propose an Oriented FAST and Rotated BRIEF (ORB)-guided visual odometry with selective online adaptation, named ORB-SfMLearner. We present a novel use of ORB features for learning-based ego-motion estimation, leading to more robust and accurate results. We also introduce a cross-attention mechanism to enhance the explainability of PoseNet, revealing that the driving direction of the vehicle can be explained through attention weights, a novel exploration in this area. To improve generalizability, our selective online adaptation allows the network to rapidly and selectively adjust to the optimal parameters across different domains. Experimental results on the KITTI and vKITTI datasets show that our method outperforms previous state-of-the-art deep visual odometry methods in terms of ego-motion accuracy and generalizability.
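The abstract does not detail how the ORB features guide the network, but ORB's keypoint stage is the FAST segment test: a pixel is a corner if enough contiguous pixels on a radius-3 ring around it are all brighter or all darker than the center. As a rough illustration only (a NumPy-only sketch, not the authors' pipeline; the function name, threshold, and test image are all hypothetical), a simplified FAST-9 detector looks like this:

```python
import numpy as np

# Offsets (dx, dy) of the 16-pixel Bresenham circle (radius 3) used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_corners(img, threshold=20, n=9):
    """Return (row, col) corners via a simplified FAST-n segment test."""
    h, w = img.shape
    corners = []
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            center = int(img[r, c])
            ring = np.array([int(img[r + dy, c + dx]) for dx, dy in CIRCLE])
            brighter = ring > center + threshold
            darker = ring < center - threshold
            # A corner needs n contiguous brighter (or darker) ring pixels;
            # doubling the mask handles runs that wrap around the ring.
            for mask in (brighter, darker):
                doubled = np.concatenate([mask, mask])
                run = best = 0
                for v in doubled:
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n:
                    corners.append((r, c))
                    break
    return corners

# Hypothetical test image: dark background with a bright square whose
# top-left corner at (10, 10) should trigger the segment test.
img = np.zeros((20, 20), dtype=np.uint8)
img[10:, 10:] = 255
corners = fast_corners(img)
print((10, 10) in corners)  # True
```

The real ORB detector adds a Harris-score ranking, an image pyramid for scale, and an intensity-centroid orientation before computing rotated BRIEF descriptors; in practice one would call OpenCV's `cv2.ORB_create` rather than hand-rolling the test.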