Spatiotemporal Guided Self-Supervised Depth Completion from LiDAR and Monocular Camera
Z. Chen, Hantao Wang, Lijun Wu, Yanlin Zhou, Dapeng Oliver Wu
2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), December 2020. DOI: 10.1109/VCIP49819.2020.9301857
Cited by: 5
Abstract
Depth completion aims to estimate dense depth maps from sparse depth measurements. It has become increasingly important in autonomous driving and has therefore drawn wide attention. In this paper, we introduce photometric losses in both the spatial and temporal domains to jointly guide self-supervised depth completion. The method performs accurate end-to-end depth completion using LiDAR and a monocular camera. In particular, during model training we fully exploit the consistency between temporally adjacent frames and between stereo views to improve the accuracy of depth completion. We design a self-supervised framework that suppresses the negative effects of moving objects and of regions with smooth gradients. Experiments are conducted on the KITTI dataset, and the results indicate that our self-supervised method attains competitive performance.
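The paper itself does not include code. As a rough illustration of the kind of spatiotemporal photometric supervision the abstract describes, below is a minimal PyTorch-style sketch of an SSIM + L1 photometric loss with a per-pixel minimum over warped source views and an "auto-mask" that discards pixels dominated by moving objects or textureless, smooth-gradient regions. The specific formulation (SSIM weighting, min-reprojection, auto-masking) follows common practice in self-supervised depth estimation (e.g., Monodepth2) and is an assumption; the authors' exact losses and masking strategy may differ.

```python
import torch
import torch.nn.functional as F

def ssim_dissimilarity(x, y, C1=0.01**2, C2=0.03**2):
    """Simplified SSIM dissimilarity, (1 - SSIM)/2, over 3x3 windows."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, warped, alpha=0.85):
    """SSIM + L1 photometric error between the target frame and a source
    frame warped into the target view using the predicted dense depth."""
    l1 = torch.abs(target - warped).mean(1, keepdim=True)
    return alpha * ssim_dissimilarity(target, warped).mean(1, keepdim=True) \
        + (1 - alpha) * l1

def masked_reprojection_loss(target, warped_views, source_views, alpha=0.85):
    """Spatiotemporal photometric loss with occlusion handling and masking.

    warped_views: source frames (temporal neighbours and/or the stereo pair)
    warped into the target view; source_views: the same frames unwarped.
    """
    # Per-pixel minimum over all warped views down-weights occlusions.
    reproj = torch.stack(
        [photometric_loss(target, w, alpha) for w in warped_views], 0)
    min_reproj, _ = reproj.min(0)
    # Auto-mask: drop pixels where the *unwarped* source already matches the
    # target better than the warped one -- typically moving objects and
    # low-texture, smooth-gradient regions, which the paper also suppresses.
    identity = torch.stack(
        [photometric_loss(target, s, alpha) for s in source_views], 0)
    min_identity, _ = identity.min(0)
    mask = (min_reproj < min_identity).float()
    return (mask * min_reproj).sum() / mask.sum().clamp(min=1.0)
```

In a training loop, `warped_views` would be produced by differentiable warping (e.g., `F.grid_sample`) of the adjacent frames and the stereo image using the predicted depth, camera intrinsics, and relative poses; the sparse LiDAR input additionally supervises the network directly at the measured pixels.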