{"title":"Pano2Room: Novel View Synthesis from a Single Indoor Panorama","authors":"Guo Pu, Yiming Zhao, Zhouhui Lian","doi":"arxiv-2408.11413","DOIUrl":null,"url":null,"abstract":"Recent single-view 3D generative methods have made significant advancements\nby leveraging knowledge distilled from extensive 3D object datasets. However,\nchallenges persist in the synthesis of 3D scenes from a single view, primarily\ndue to the complexity of real-world environments and the limited availability\nof high-quality prior resources. In this paper, we introduce a novel approach\ncalled Pano2Room, designed to automatically reconstruct high-quality 3D indoor\nscenes from a single panoramic image. These panoramic images can be easily\ngenerated using a panoramic RGBD inpainter from captures at a single location\nwith any camera. The key idea is to initially construct a preliminary mesh from\nthe input panorama, and iteratively refine this mesh using a panoramic RGBD\ninpainter while collecting photo-realistic 3D-consistent pseudo novel views.\nFinally, the refined mesh is converted into a 3D Gaussian Splatting field and\ntrained with the collected pseudo novel views. This pipeline enables the\nreconstruction of real-world 3D scenes, even in the presence of large\nocclusions, and facilitates the synthesis of photo-realistic novel views with\ndetailed geometry. Extensive qualitative and quantitative experiments have been\nconducted to validate the superiority of our method in single-panorama indoor\nnovel synthesis compared to the state-of-the-art. Our code and data are\navailable at \\url{https://github.com/TrickyGo/Pano2Room}.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.11413","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Recent single-view 3D generative methods have made significant advances by leveraging knowledge distilled from extensive 3D object datasets. However, synthesizing 3D scenes from a single view remains challenging, primarily due to the complexity of real-world environments and the limited availability of high-quality priors. In this paper, we introduce Pano2Room, a novel approach that automatically reconstructs high-quality 3D indoor scenes from a single panoramic image. Such panoramic images can be easily generated with a panoramic RGBD inpainter from captures taken at a single location with any camera. The key idea is to first construct a preliminary mesh from the input panorama, then iteratively refine this mesh with a panoramic RGBD inpainter while collecting photo-realistic, 3D-consistent pseudo novel views. Finally, the refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views. This pipeline enables the reconstruction of real-world 3D scenes even in the presence of large occlusions, and supports the synthesis of photo-realistic novel views with detailed geometry. Extensive qualitative and quantitative experiments validate the superiority of our method over the state of the art in single-panorama indoor novel view synthesis. Our code and data are available at https://github.com/TrickyGo/Pano2Room.
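
For readers who want a concrete picture of the pipeline, the sketch below restates the abstract's three stages (mesh construction, iterative inpainting-based refinement, and conversion to a 3D Gaussian Splatting field) as Python-style pseudocode. Every helper name in it (build_mesh_from_panorama, PanoRGBDInpainter, sample_novel_poses, render_mesh, fuse_into_mesh, mesh_to_gaussians, train_gaussians) is a hypothetical placeholder introduced for illustration, not the repository's actual API; see the linked GitHub repository for the real implementation.

```python
# Hedged sketch of the Pano2Room pipeline as described in the abstract.
# All helpers below are hypothetical placeholders, not the repo's real API.

def pano2room(panorama_rgb, num_refinement_iters=10):
    # Stage 1: lift the input panorama to a preliminary textured mesh
    # (e.g., via panoramic depth estimation and back-projection).
    mesh = build_mesh_from_panorama(panorama_rgb)          # hypothetical helper

    inpainter = PanoRGBDInpainter()                        # hypothetical RGBD inpainting model
    pseudo_views = []

    # Stage 2: iteratively refine the mesh. Render it from novel camera
    # poses, inpaint the disoccluded (missing) RGBD regions, fuse the
    # completed geometry back in, and keep the result as a pseudo view.
    for pose in sample_novel_poses(num_refinement_iters):  # hypothetical pose sampler
        rgb, depth, mask = render_mesh(mesh, pose)         # mask marks holes behind occluders
        rgb_full, depth_full = inpainter(rgb, depth, mask)
        mesh = fuse_into_mesh(mesh, rgb_full, depth_full, pose)
        pseudo_views.append((pose, rgb_full))              # photo-realistic, 3D-consistent views

    # Stage 3: convert the refined mesh into a 3D Gaussian Splatting field
    # and optimize it against the collected pseudo novel views.
    gaussians = mesh_to_gaussians(mesh)                    # hypothetical conversion
    return train_gaussians(gaussians, pseudo_views)        # standard 3DGS-style optimization
```

One plausible reading of the final step is that the refined mesh supplies a strong geometric initialization for the Gaussians, while the collected pseudo novel views provide the multi-view photometric supervision that a single panorama alone cannot; 3D Gaussian Splatting then delivers fast, photo-realistic rendering at test time.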