Dense-depth-net: a spatial-temporal approach on depth completion task

Tri-Hai Nguyen, Myungsik Yoo
DOI: 10.1109/TENSYMP52854.2021.9550990
Published in: 2021 IEEE Region 10 Symposium (TENSYMP)
Publication date: 2021-08-23
Citations: 1

Abstract

Depth completion is an essential function in the perception system of an autonomous vehicle. Scene geometric representation has been studied extensively with various convolutional neural networks (CNNs) under supervised or self-supervised learning. This paper utilizes recurrent neural networks (RNNs) to exploit temporal information from camera video sequences, which helps mitigate the mismatch between two consecutive data frames. We propose an architecture consisting of two sequential processing stages: a spatial exploitation stage built from a two-branch network, and a temporal exploitation stage based on a novel convolutional LSTM (ConvLSTM). Furthermore, we use the ability of long short-term memory (LSTM)-based RNNs to estimate a one-step depth map, so that objects are represented not only within a data frame but also across its temporal neighborhood. Moreover, the proposed ConvLSTM network is shown to be able to make depth forecasts for future or occluded parts of an image frame. We evaluate the proposed architecture on the KITTI dataset and show that it improves accuracy under supervised learning.
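The abstract's temporal stage rests on the ConvLSTM idea: replace the matrix multiplications in the LSTM gate equations with 2-D convolutions, so the recurrent state keeps its spatial layout across frames. The paper's exact network is not reproduced here; the following is a minimal, generic PyTorch sketch of a ConvLSTM cell unrolled over a short frame sequence, with all names (`ConvLSTMCell`, channel counts, kernel size) chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: LSTM gates computed with a 2-D convolution,
    so the hidden and cell states retain spatial structure over time."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # One convolution emits all four gates (input, forget, output, candidate).
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=padding)

    def forward(self, x, state):
        h, c = state
        # Concatenate the current frame features with the previous hidden state.
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update the spatial cell memory
        h = o * torch.tanh(c)           # new spatial hidden state
        return h, c

# Unroll the cell over a toy sequence of single-channel (depth-like) frames.
cell = ConvLSTMCell(in_channels=1, hidden_channels=8)
B, T, H, W = 2, 4, 16, 16
frames = torch.randn(B, T, 1, H, W)
h = torch.zeros(B, 8, H, W)
c = torch.zeros(B, 8, H, W)
for t in range(T):
    h, c = cell(frames[:, t], (h, c))
print(h.shape)  # torch.Size([2, 8, 16, 16])
```

In a depth-completion pipeline of the kind described, the input frames would be feature maps from the spatial (two-branch) stage, and a final convolution would project the hidden state `h` to a one-channel dense depth map; running the cell one step past the last observed frame is what enables the "forecast" behavior the abstract mentions.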