Deep Sparse Depth Completion Using Joint Depth and Normal Estimation

Ying Li, Cheolkon Jung
{"title":"基于联合深度和正态估计的深度稀疏补全","authors":"Ying Li, Cheolkon Jung","doi":"10.1109/ISCAS46773.2023.10181618","DOIUrl":null,"url":null,"abstract":"Depth completion densifies sparse depth images obtained from LiDAR and is a great challenge due to the given extremely sparse information. In this paper, we propose deep sparse depth completion using joint depth and normal estimation. There exists a mutually convertible geometric relationship between depth and surface normal in 3D coordinate space. Based on the geometric relationship, we build a novel adversarial model that consists of one generator and two discriminators. We adopt an encoder-decoder structure for the generator. The encoder extracts features from RGB image, sparse depth image and its binary mask that represent the inherent geometric relationship between depth and surface normal, while two decoders with the same structure generate dense depth and surface normal based on the geometric relationship. We utilize two discriminators to generate guide information for sparse depth completion from the input RGB image while imposing an auxiliary geometric constraint for depth refinement. Experimental results on KITTI dataset show that the proposed method generates dense depth images with accurate object boundaries and outperforms state-of-the-art ones in terms of visual quality and quantitative measurements.","PeriodicalId":177320,"journal":{"name":"2023 IEEE International Symposium on Circuits and Systems (ISCAS)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Sparse Depth Completion Using Joint Depth and Normal Estimation\",\"authors\":\"Ying Li, Cheolkon Jung\",\"doi\":\"10.1109/ISCAS46773.2023.10181618\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Depth completion densifies sparse depth images obtained from LiDAR and is a great challenge due to the given extremely sparse information. In this paper, we propose deep sparse depth completion using joint depth and normal estimation. There exists a mutually convertible geometric relationship between depth and surface normal in 3D coordinate space. Based on the geometric relationship, we build a novel adversarial model that consists of one generator and two discriminators. We adopt an encoder-decoder structure for the generator. The encoder extracts features from RGB image, sparse depth image and its binary mask that represent the inherent geometric relationship between depth and surface normal, while two decoders with the same structure generate dense depth and surface normal based on the geometric relationship. We utilize two discriminators to generate guide information for sparse depth completion from the input RGB image while imposing an auxiliary geometric constraint for depth refinement. 
Experimental results on KITTI dataset show that the proposed method generates dense depth images with accurate object boundaries and outperforms state-of-the-art ones in terms of visual quality and quantitative measurements.\",\"PeriodicalId\":177320,\"journal\":{\"name\":\"2023 IEEE International Symposium on Circuits and Systems (ISCAS)\",\"volume\":\"90 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Symposium on Circuits and Systems (ISCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCAS46773.2023.10181618\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Symposium on Circuits and Systems (ISCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCAS46773.2023.10181618","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Depth completion densifies sparse depth images obtained from LiDAR and is a great challenge due to the given extremely sparse information. In this paper, we propose deep sparse depth completion using joint depth and normal estimation. There exists a mutually convertible geometric relationship between depth and surface normal in 3D coordinate space. Based on the geometric relationship, we build a novel adversarial model that consists of one generator and two discriminators. We adopt an encoder-decoder structure for the generator. The encoder extracts features from RGB image, sparse depth image and its binary mask that represent the inherent geometric relationship between depth and surface normal, while two decoders with the same structure generate dense depth and surface normal based on the geometric relationship. We utilize two discriminators to generate guide information for sparse depth completion from the input RGB image while imposing an auxiliary geometric constraint for depth refinement. Experimental results on KITTI dataset show that the proposed method generates dense depth images with accurate object boundaries and outperforms state-of-the-art ones in terms of visual quality and quantitative measurements.
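
The "mutually convertible geometric relationship" between depth and surface normals that the abstract builds on can be illustrated with a short sketch. The snippet below is only an illustration, not the authors' implementation: it assumes a pinhole camera with hypothetical intrinsics (fx, fy, cx, cy), back-projects a depth map to a 3D point map, and takes the cross product of the point map's spatial derivatives to recover per-pixel unit normals.

```python
# Minimal sketch of depth -> surface normal conversion under a pinhole camera model.
# This is illustrative only; fx, fy, cx, cy are assumed intrinsics, not values from the paper.
import numpy as np

def depth_to_normals(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Convert a dense depth map (H, W) to unit surface normals (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to a camera-space 3D point: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1)            # (H, W, 3) point map
    # Finite differences along the image axes approximate the surface tangent vectors.
    dp_du = np.gradient(points, axis=1)
    dp_dv = np.gradient(points, axis=0)
    normals = np.cross(dp_du, dp_dv)                      # normal = tangent_u x tangent_v
    norm = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)            # normalize to unit length
```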