LT3SD: Latent Trees for 3D Scene Diffusion

Quan Meng, Lei Li, Matthias Nießner, Angela Dai
{"title":"LT3SD: Latent Trees for 3D Scene Diffusion","authors":"Quan Meng, Lei Li, Matthias Nießner, Angela Dai","doi":"arxiv-2409.08215","DOIUrl":null,"url":null,"abstract":"We present LT3SD, a novel latent diffusion model for large-scale 3D scene\ngeneration. Recent advances in diffusion models have shown impressive results\nin 3D object generation, but are limited in spatial extent and quality when\nextended to 3D scenes. To generate complex and diverse 3D scene structures, we\nintroduce a latent tree representation to effectively encode both\nlower-frequency geometry and higher-frequency detail in a coarse-to-fine\nhierarchy. We can then learn a generative diffusion process in this latent 3D\nscene space, modeling the latent components of a scene at each resolution\nlevel. To synthesize large-scale scenes with varying sizes, we train our\ndiffusion model on scene patches and synthesize arbitrary-sized output 3D\nscenes through shared diffusion generation across multiple scene patches.\nThrough extensive experiments, we demonstrate the efficacy and benefits of\nLT3SD for large-scale, high-quality unconditional 3D scene generation and for\nprobabilistic completion for partial scene observations.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08215","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

We present LT3SD, a novel latent diffusion model for large-scale 3D scene generation. Recent advances in diffusion models have shown impressive results in 3D object generation, but are limited in spatial extent and quality when extended to 3D scenes. To generate complex and diverse 3D scene structures, we introduce a latent tree representation to effectively encode both lower-frequency geometry and higher-frequency detail in a coarse-to-fine hierarchy. We can then learn a generative diffusion process in this latent 3D scene space, modeling the latent components of a scene at each resolution level. To synthesize large-scale scenes with varying sizes, we train our diffusion model on scene patches and synthesize arbitrary-sized output 3D scenes through shared diffusion generation across multiple scene patches. Through extensive experiments, we demonstrate the efficacy and benefits of LT3SD for large-scale, high-quality unconditional 3D scene generation and for probabilistic completion for partial scene observations.
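The coarse-to-fine hierarchy described above can be pictured as a 3D analogue of a Laplacian pyramid: each level of the tree holds a lower-frequency version of the scene grid plus a residual volume carrying the higher-frequency detail, and the scene is rebuilt by walking back up the levels. The sketch below illustrates only that decompose/reconstruct structure on a dense grid; it is not the paper's implementation. LT3SD encodes the detail at each level with learned latent feature volumes and runs diffusion on those latents, whereas this sketch stores raw residuals, and the 64^3 grid, the function names, and the use of average pooling are assumptions for illustration.

```python
# Minimal sketch of a coarse-to-fine scene decomposition (Laplacian-pyramid style),
# as an analogy for the latent tree in LT3SD. Illustrative only: the real model
# replaces the raw residual volumes with learned latent feature volumes.
import torch
import torch.nn.functional as F


def build_tree(grid: torch.Tensor, levels: int = 3):
    """Split a (1, 1, D, H, W) grid into a coarse base plus per-level detail volumes."""
    details = []
    current = grid
    for _ in range(levels):
        coarse = F.avg_pool3d(current, kernel_size=2)  # lower-frequency geometry
        upsampled = F.interpolate(coarse, scale_factor=2,
                                  mode="trilinear", align_corners=False)
        details.append(current - upsampled)            # higher-frequency detail residual
        current = coarse
    return current, details                            # base + details (fine to coarse)


def reconstruct(base: torch.Tensor, details: list) -> torch.Tensor:
    """Invert the decomposition: upsample the base and add back each detail level."""
    current = base
    for detail in reversed(details):
        current = F.interpolate(current, scale_factor=2,
                                mode="trilinear", align_corners=False) + detail
    return current


if __name__ == "__main__":
    grid = torch.randn(1, 1, 64, 64, 64)   # hypothetical 64^3 scene-patch grid (e.g. a TSDF)
    base, details = build_tree(grid, levels=3)
    recon = reconstruct(base, details)
    print("base:", tuple(base.shape), "details:", [tuple(d.shape) for d in details])
    print("max reconstruction error:", (recon - grid).abs().max().item())
```

Because each detail volume is defined relative to the upsampled coarser level, the decomposition is exactly invertible; in LT3SD the corresponding per-level latent components are what the diffusion model learns to generate, level by level and patch by patch, which is what allows arbitrary-sized scenes to be assembled from shared generation across neighboring patches.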