Tri-Perspective View Decomposition for Geometry Aware Depth Completion and Super-Resolution.

Impact Factor 18.6 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Zhiqiang Yan, Kun Wang, Xiang Li, Guangwei Gao, Jun Li, Jian Yang
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence · Published: 2025-08-06 · DOI: 10.1109/tpami.2025.3596391
Citations: 0

Abstract

Depth completion and super-resolution are crucial tasks for comprehensive RGB-D scene understanding, as they involve reconstructing the precise 3D geometry of a scene from sparse or low-resolution depth measurements. However, most existing methods either rely solely on 2D depth representations or directly incorporate raw 3D point clouds for compensation, both of which remain insufficient to capture the fine-grained 3D geometry of the scene. In this paper, we introduce Tri-Perspective View Decomposition (TPVD), a framework that can explicitly model 3D geometry. To this end, (1) TPVD ingeniously decomposes the original 3D point cloud into three 2D views, one of which corresponds to the sparse or low-resolution depth input. (2) For sufficient geometric interaction, TPV Fusion is designed to update the 2D TPV features through recurrent 2D-3D-2D aggregation. (3) By adaptively searching for TPV affinitive neighbors, two additional refinement heads are developed, one per task, to further improve geometric consistency. Meanwhile, we build novel datasets named TOFDC for depth completion and TOFDSR for depth super-resolution. Both datasets are acquired using time-of-flight (TOF) sensors and color cameras on smartphones. Extensive experiments on the TOFDC, KITTI, NYUv2, SUN RGBD, VKITTI, TOFDSR, RGB-D-D, Lu, and Middlebury datasets indicate that our TPVD outperforms previous depth completion and super-resolution methods, reaching the state of the art.
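The core idea of step (1) — decomposing a 3D point cloud into three orthogonal 2D views — can be illustrated with a toy sketch. The code below scatters a point cloud onto three axis-aligned planes, keeping the nearest coordinate along the dropped axis as each view's "depth" value. This is only an illustrative analogue under assumed conventions (grid size, nearest-hit aggregation, zero for empty cells); it is not the authors' implementation.

```python
import numpy as np

def tpv_decompose(points, grid=64):
    """Toy tri-perspective decomposition: project an (N, 3) point cloud onto
    three orthogonal 2D rasters, one per dropped axis, keeping the minimum
    (nearest) dropped-axis coordinate per cell as a sparse depth value."""
    # Normalize coordinates into [0, 1) so they index a grid x grid raster.
    lo, hi = points.min(axis=0), points.max(axis=0)
    uvw = (points - lo) / np.maximum(hi - lo, 1e-8)
    idx = np.minimum((uvw * grid).astype(int), grid - 1)

    views = []
    for drop in range(3):  # drop x, y, z in turn -> three orthogonal views
        keep = [a for a in range(3) if a != drop]
        img = np.full((grid, grid), np.inf)
        for p, d in zip(idx[:, keep], points[:, drop]):
            img[p[0], p[1]] = min(img[p[0], p[1]], d)  # keep nearest hit
        img[np.isinf(img)] = 0.0  # empty cells: 0 means "no measurement"
        views.append(img)
    return views  # three sparse 2D depth maps, one per perspective

# Example: 500 random points decompose into three 64x64 single-channel maps.
pts = np.random.default_rng(0).uniform(size=(500, 3))
vx, vy, vz = tpv_decompose(pts)
print(vx.shape, vy.shape, vz.shape)
```

In the paper's setting, one of these three views coincides with the sparse or low-resolution depth input, which is what makes the decomposition useful for completion and super-resolution.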
Journal metrics: CiteScore 28.40 · self-citation rate 3.00% · articles per year 885 · average review time 8.5 months
About the journal: The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition, and relevant specialized hardware and/or software architectures are also covered.