Surface Reconstruction From Point Clouds: A Survey and a Benchmark

ZhangJin Huang, Yuxin Wen, ZiHao Wang, Jinjuan Ren, Kui Jia
{"title":"Surface Reconstruction From Point Clouds: A Survey and a Benchmark","authors":"ZhangJin Huang;Yuxin Wen;ZiHao Wang;Jinjuan Ren;Kui Jia","doi":"10.1109/TPAMI.2024.3429209","DOIUrl":null,"url":null,"abstract":"Reconstruction of a continuous surface of two-dimensional manifold from its raw, discrete point cloud observation is a long-standing problem in computer vision and graphics research. The problem is technically ill-posed, and becomes more difficult considering that various sensing imperfections would appear in the point clouds obtained by practical depth scanning. In literature, a rich set of methods has been proposed, and reviews of existing methods are also provided. However, existing reviews are short of thorough investigations on a common benchmark. The present paper aims to review and benchmark existing methods in the new era of deep learning surface reconstruction. To this end, we contribute a large-scale benchmarking dataset consisting of both synthetic and real-scanned data; the benchmark includes object- and scene-level surfaces and takes into account various sensing imperfections that are commonly encountered in practical depth scanning. We conduct thorough empirical studies by comparing existing methods on the constructed benchmark, and pay special attention on robustness of existing methods against various scanning imperfections; we also study how different methods generalize in terms of reconstructing complex surface shapes. Our studies help identity the best conditions under which different methods work, and suggest some empirical findings. 
For example, while deep learning methods are increasingly popular in the research community, our systematic studies suggest that, surprisingly, a few classical methods perform even better in terms of both robustness and generalization; our studies also suggest that the practical challenges of \n<italic>misalignment of point sets from multi-view scanning</i>\n, \n<italic>missing of surface points</i>\n, and \n<italic>point outliers</i>\n remain unsolved by all the existing surface reconstruction methods. We expect that the benchmark and our studies would be valuable both for practitioners and as a guidance for new innovations in future research.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10599623/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Reconstruction of a continuous surface of a two-dimensional manifold from its raw, discrete point cloud observation is a long-standing problem in computer vision and graphics research. The problem is technically ill-posed, and becomes more difficult given the various sensing imperfections that appear in point clouds obtained by practical depth scanning. In the literature, a rich set of methods has been proposed, and reviews of existing methods are also provided. However, existing reviews fall short of thorough investigations on a common benchmark. The present paper aims to review and benchmark existing methods in the new era of deep learning surface reconstruction. To this end, we contribute a large-scale benchmarking dataset consisting of both synthetic and real-scanned data; the benchmark includes object- and scene-level surfaces and takes into account various sensing imperfections that are commonly encountered in practical depth scanning. We conduct thorough empirical studies by comparing existing methods on the constructed benchmark, and pay special attention to the robustness of existing methods against various scanning imperfections; we also study how different methods generalize in terms of reconstructing complex surface shapes. Our studies help identify the best conditions under which different methods work, and suggest some empirical findings. For example, while deep learning methods are increasingly popular in the research community, our systematic studies suggest that, surprisingly, a few classical methods perform even better in terms of both robustness and generalization; our studies also suggest that the practical challenges of misalignment of point sets from multi-view scanning, missing surface points, and point outliers remain unsolved by all the existing surface reconstruction methods. We expect that the benchmark and our studies would be valuable both for practitioners and as a guidance for new innovations in future research.
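The three unsolved challenges the abstract names (multi-view misalignment, missing surface points, and point outliers), plus ordinary sensor noise, can be illustrated by corrupting a clean point cloud. The sketch below is a minimal NumPy illustration under assumed parameters (sphere sampling, noise scale, outlier fraction, 2° rotation error); it is not the benchmark's actual corruption protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean "ground-truth" point cloud: points sampled on a unit sphere.
n = 2000
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# 1) Sensor noise: small per-point Gaussian perturbation.
noisy = pts + rng.normal(scale=0.01, size=pts.shape)

# 2) Point outliers: replace a small fraction of points with random clutter.
n_out = int(0.02 * n)
idx = rng.choice(n, size=n_out, replace=False)
noisy[idx] = rng.uniform(-1.5, 1.5, size=(n_out, 3))

# 3) Missing surface points: drop a view-dependent region, mimicking
#    occlusion of the far side of the object from a single scan.
visible = noisy[noisy[:, 2] > -0.3]

# 4) Misalignment across views: a second "scan" off by a small rigid motion,
#    as happens when multi-view registration is imperfect.
theta = np.deg2rad(2.0)                      # 2-degree rotation error
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.01, -0.005, 0.0])            # small translation error
second_view = pts @ R.T + t

# The corrupted union of both views is what a reconstruction method receives.
scan = np.vstack([visible, second_view])
```

Feeding `scan` (rather than `pts`) to any reconstruction method exposes exactly the failure modes the benchmark measures: outliers pull the surface outward, the occluded region must be hallucinated, and the misaligned second view thickens the surface into a double layer.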
The benchmark is publicly available at https://Gorilla-Lab-SCUT.github.io/SurfaceReconstructionBenchmark.