Pano2Room: Novel View Synthesis from a Single Indoor Panorama

Guo Pu, Yiming Zhao, Zhouhui Lian
{"title":"Pano2Room: Novel View Synthesis from a Single Indoor Panorama","authors":"Guo Pu, Yiming Zhao, Zhouhui Lian","doi":"arxiv-2408.11413","DOIUrl":null,"url":null,"abstract":"Recent single-view 3D generative methods have made significant advancements\nby leveraging knowledge distilled from extensive 3D object datasets. However,\nchallenges persist in the synthesis of 3D scenes from a single view, primarily\ndue to the complexity of real-world environments and the limited availability\nof high-quality prior resources. In this paper, we introduce a novel approach\ncalled Pano2Room, designed to automatically reconstruct high-quality 3D indoor\nscenes from a single panoramic image. These panoramic images can be easily\ngenerated using a panoramic RGBD inpainter from captures at a single location\nwith any camera. The key idea is to initially construct a preliminary mesh from\nthe input panorama, and iteratively refine this mesh using a panoramic RGBD\ninpainter while collecting photo-realistic 3D-consistent pseudo novel views.\nFinally, the refined mesh is converted into a 3D Gaussian Splatting field and\ntrained with the collected pseudo novel views. This pipeline enables the\nreconstruction of real-world 3D scenes, even in the presence of large\nocclusions, and facilitates the synthesis of photo-realistic novel views with\ndetailed geometry. Extensive qualitative and quantitative experiments have been\nconducted to validate the superiority of our method in single-panorama indoor\nnovel synthesis compared to the state-of-the-art. Our code and data are\navailable at \\url{https://github.com/TrickyGo/Pano2Room}.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.11413","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent single-view 3D generative methods have made significant advancements by leveraging knowledge distilled from extensive 3D object datasets. However, challenges persist in the synthesis of 3D scenes from a single view, primarily due to the complexity of real-world environments and the limited availability of high-quality prior resources. In this paper, we introduce a novel approach called Pano2Room, designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image. These panoramic images can be easily generated using a panoramic RGBD inpainter from captures at a single location with any camera. The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter while collecting photo-realistic 3D-consistent pseudo novel views. Finally, the refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views. This pipeline enables the reconstruction of real-world 3D scenes, even in the presence of large occlusions, and facilitates the synthesis of photo-realistic novel views with detailed geometry. Extensive qualitative and quantitative experiments have been conducted to validate the superiority of our method in single-panorama indoor novel view synthesis compared to the state-of-the-art. Our code and data are available at https://github.com/TrickyGo/Pano2Room.
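
The sketch below lays out the three pipeline stages named in the abstract: mesh initialization from the panorama, iterative refinement with a panoramic RGBD inpainter while collecting pseudo novel views, and conversion of the refined mesh into a trained 3D Gaussian Splatting field. It is a minimal structural sketch under assumed naming; every helper function and the PseudoView dataclass are hypothetical placeholders, not the authors' actual API, whose real implementation lives in the linked repository.

```python
# Structural sketch of the Pano2Room pipeline as described in the abstract.
# All helpers below are hypothetical placeholders, not the authors' code; see
# https://github.com/TrickyGo/Pano2Room for the actual implementation.
from dataclasses import dataclass
from typing import Any, List


@dataclass
class PseudoView:
    """A photo-realistic, 3D-consistent pseudo novel view collected during refinement."""
    pose: Any    # camera pose (e.g. a 4x4 camera-to-world matrix)
    rgb: Any     # completed color image rendered at that pose
    depth: Any   # completed depth map rendered at that pose


def build_mesh_from_panorama(pano_rgbd: Any) -> Any:
    """Unproject the panoramic RGB-D capture into a preliminary triangle mesh."""
    raise NotImplementedError("placeholder for the mesh-initialization step")


def render_mesh(mesh: Any, pose: Any):
    """Render color, depth, and a hole mask (occluded/missing regions) at a pose."""
    raise NotImplementedError("placeholder for mesh rendering")


def inpaint_rgbd(rgb: Any, depth: Any, mask: Any):
    """Fill the masked regions with a panoramic RGBD inpainter."""
    raise NotImplementedError("placeholder for the RGBD inpainter")


def fuse_rgbd_into_mesh(mesh: Any, rgb: Any, depth: Any, pose: Any) -> Any:
    """Merge the completed RGB-D observation back into the mesh."""
    raise NotImplementedError("placeholder for mesh refinement/fusion")


def mesh_to_gaussian_field(mesh: Any) -> Any:
    """Convert the refined mesh into an initial 3D Gaussian Splatting field."""
    raise NotImplementedError("placeholder for mesh-to-3DGS conversion")


def train_gaussian_field(gaussians: Any, views: List[PseudoView]) -> Any:
    """Optimize the Gaussian field against the collected pseudo novel views."""
    raise NotImplementedError("placeholder for 3DGS training")


def pano2room(pano_rgbd: Any, novel_poses: List[Any]) -> Any:
    """High-level flow: mesh init -> iterative inpainting refinement -> 3DGS."""
    mesh = build_mesh_from_panorama(pano_rgbd)

    pseudo_views: List[PseudoView] = []
    for pose in novel_poses:
        # Rendering from a novel viewpoint exposes occluded regions as holes,
        # which the panoramic RGBD inpainter completes.
        rgb, depth, mask = render_mesh(mesh, pose)
        rgb, depth = inpaint_rgbd(rgb, depth, mask)
        # Fuse the completed observation back into the mesh and keep it as a
        # pseudo novel view for the later 3DGS training stage.
        mesh = fuse_rgbd_into_mesh(mesh, rgb, depth, pose)
        pseudo_views.append(PseudoView(pose, rgb, depth))

    gaussians = mesh_to_gaussian_field(mesh)
    return train_gaussian_field(gaussians, pseudo_views)
```

The key design point carried over from the abstract is that the mesh serves as an intermediate, geometry-aware scratchpad: inpainted content is fused back into it so that subsequent renders stay 3D-consistent, and only then is everything distilled into a Gaussian Splatting field for photo-realistic novel view synthesis.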