Self-Supervised Monocular Depth Estimation Scale Recovery using RANSAC Outlier Removal

Zhuoyue Wu, G. Zhuo, Feng Xue
Published in: 2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI)
Publication date: 2020-12-18
DOI: 10.1109/CVCI51460.2020.9338538
Citations: 3

Abstract

Recently, self-supervised methods have become an increasingly significant branch of depth estimation, especially in autonomous driving applications. However, per-pixel depth maps predicted from RGB images still suffer from an uncertain scale factor that arises from the nature of monocular image sequences, which limits their practical use. In this work, we first analyze this scale uncertainty both theoretically and empirically. We then perform scale recovery using a geometric constraint to estimate an accurate scale factor, and introduce RANSAC (Random Sample Consensus) outlier removal into the pipeline to obtain accurate ground-point extraction. Extensive experiments on the KITTI dataset (a benchmark collected with an autonomous driving platform built by the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago, comprising stereo and optical-flow image pairs as well as laser data, split into training and test sets for deep learning) show that, using only a camera-height prior and no additional sensors, our proposed method achieves accurate scale recovery and outperforms existing scale recovery methods.
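The core idea in the abstract, extracting ground points with RANSAC and recovering scale from a known camera height, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, iteration count, and inlier threshold are assumptions, and the plane fit uses a standard SVD least-squares step. The scale factor is the ratio of the true camera height to the estimated distance from the camera origin to the fitted ground plane.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane n·x + d = 0 (||n|| = 1) through a set of 3-D points."""
    centroid = pts.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal.
    _, _, vh = np.linalg.svd(pts - centroid)
    n = vh[-1]
    d = -n @ centroid
    return n, d

def ransac_ground_plane(points, n_iters=200, thresh=0.05, seed=0):
    """Fit a ground plane robustly: sample 3-point planes, keep the one
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(points @ n + d) < thresh  # point-to-plane distance
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane(points[best_inliers])

def recover_scale(ground_candidates, real_camera_height):
    """Scale factor = known camera height / estimated height in the
    (scale-ambiguous) monocular reconstruction."""
    n, d = ransac_ground_plane(ground_candidates)
    estimated_height = abs(d)  # distance from camera origin (0,0,0) to plane
    return real_camera_height / estimated_height
```

For example, if the monocular reconstruction places the ground plane 0.5 units below the camera while the real mounting height is 1.5 m, the recovered scale factor is 3. The RANSAC step matters because candidate ground points in a real depth map are contaminated by vehicles, curbs, and depth noise, which would bias a plain least-squares fit.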