Unsupervised Deep Learning of Depth, Ego-Motion, and Optical Flow from Stereo Images

Delong Yang, Zhaohui Luo, Peng Shang, Zhigang Hu
{"title":"Unsupervised Deep Learning of Depth, Ego-Motion, and Optical Flow from Stereo Images","authors":"Delong Yang, Zhaohui Luo, Peng Shang, Zhigang Hu","doi":"10.1109/ICTLE53360.2021.9525746","DOIUrl":null,"url":null,"abstract":"Unsupervised deep learning methods have demonstrated an impressive performance for understanding the structure of 3D scene from videos. These data-based learning methods are able to learn the tasks, such as depth, ego-motion, and optical flow estimation. In this paper, we propose a novel unsupervised deep learning method to jointly estimate scene depth, camera ego-motion, and optical flow from stereo images. Consecutive stereo images are used to train the system. After training stage, the system is able to estimate dense depth map, camera 6D pose, and optical flow by using a sequence of monocular images. No labelled data set is required for training. The supervision signals for training three deep neural networks of the system come from various forms of image warping. Due to the use of optical flow, the impact caused by occlusions and moving objects on the estimation results is alleviated. Experiments on the KITTI and Cityscapes datasets show that the proposed system demonstrates a better performance in terms of accuracy in depth, ego-motion, and optical flow estimation.","PeriodicalId":199084,"journal":{"name":"2021 9th International Conference on Traffic and Logistic Engineering (ICTLE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 9th International Conference on Traffic and Logistic Engineering (ICTLE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTLE53360.2021.9525746","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Unsupervised deep learning methods have demonstrated impressive performance in understanding the structure of 3D scenes from video. These data-driven methods can learn tasks such as depth, ego-motion, and optical flow estimation. In this paper, we propose a novel unsupervised deep learning method to jointly estimate scene depth, camera ego-motion, and optical flow from stereo images. Consecutive stereo image pairs are used to train the system. After the training stage, the system can estimate a dense depth map, the camera's 6D pose, and optical flow from a sequence of monocular images. No labelled data set is required for training. The supervision signals for training the system's three deep neural networks come from various forms of image warping. Owing to the use of optical flow, the impact of occlusions and moving objects on the estimation results is alleviated. Experiments on the KITTI and Cityscapes datasets show that the proposed system achieves better accuracy in depth, ego-motion, and optical flow estimation.
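Since the abstract names image warping as the sole source of supervision, a brief sketch may help make that concrete. The snippet below is a minimal, illustrative PyTorch version (not the authors' released code) of the standard view-synthesis loss used in this family of methods: a source frame is inverse-warped into the target frame using the predicted depth and relative camera pose, and the photometric difference between the warped and real target images serves as the training signal. The function names, the intrinsics matrix `K`, and all tensor shapes are assumptions for illustration.

```python
# Minimal sketch of view-synthesis supervision via inverse warping.
# Illustrative only; names, shapes, and intrinsics K are assumptions.
import torch
import torch.nn.functional as F

def inverse_warp(src_img, depth, pose, K):
    """Warp the source view into the target frame.

    src_img: (B, 3, H, W) source view
    depth:   (B, 1, H, W) predicted target-view depth
    pose:    (B, 3, 4) predicted relative pose [R | t] (target -> source)
    K:       (B, 3, 3) camera intrinsics
    """
    B, _, H, W = src_img.shape
    device = src_img.device

    # Pixel grid in homogeneous coordinates: (B, 3, H*W)
    ys, xs = torch.meshgrid(torch.arange(H, device=device),
                            torch.arange(W, device=device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)
    pix = pix.float().view(1, 3, -1).expand(B, -1, -1)

    # Back-project pixels to 3D using depth, then move into the source frame
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_cam = pose @ cam_h                              # (B, 3, H*W)

    # Project into the source image; normalize to [-1, 1] for grid_sample
    src_pix = K @ src_cam
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)
    u = 2.0 * src_pix[:, 0] / (W - 1) - 1.0
    v = 2.0 * src_pix[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)

    return F.grid_sample(src_img, grid, padding_mode="border",
                         align_corners=True)

def photometric_loss(tgt_img, src_img, depth, pose, K):
    # L1 photometric difference between target view and warped source view
    warped = inverse_warp(src_img, depth, pose, K)
    return (tgt_img - warped).abs().mean()
```

In the paper's full system, analogous warping terms, such as stereo left-right warping and warping along the predicted optical flow, presumably supply the remaining supervision, and it is the optical flow that lets the method discount occluded and moving pixels.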