RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis

Hugo Blanc, Jean-Emmanuel Deschaud, Alexis Paljic
{"title":"RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis","authors":"Hugo Blanc, Jean-Emmanuel Deschaud, Alexis Paljic","doi":"arxiv-2408.03356","DOIUrl":null,"url":null,"abstract":"Differentiable volumetric rendering-based methods made significant progress\nin novel view synthesis. On one hand, innovative methods have replaced the\nNeural Radiance Fields (NeRF) network with locally parameterized structures,\nenabling high-quality renderings in a reasonable time. On the other hand,\napproaches have used differentiable splatting instead of NeRF's ray casting to\noptimize radiance fields rapidly using Gaussian kernels, allowing for fine\nadaptation to the scene. However, differentiable ray casting of irregularly\nspaced kernels has been scarcely explored, while splatting, despite enabling\nfast rendering times, is susceptible to clearly visible artifacts. Our work closes this gap by providing a physically consistent formulation of\nthe emitted radiance c and density {\\sigma}, decomposed with Gaussian functions\nassociated with Spherical Gaussians/Harmonics for all-frequency colorimetric\nrepresentation. We also introduce a method enabling differentiable ray casting\nof irregularly distributed Gaussians using an algorithm that integrates\nradiance fields slab by slab and leverages a BVH structure. This allows our\napproach to finely adapt to the scene while avoiding splatting artifacts. As a\nresult, we achieve superior rendering quality compared to the state-of-the-art\nwhile maintaining reasonable training times and achieving inference speeds of\n25 FPS on the Blender dataset. Project page with videos and code:\nhttps://raygauss.github.io/","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"41 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.03356","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Differentiable volumetric rendering-based methods have made significant progress in novel view synthesis. On one hand, innovative methods have replaced the Neural Radiance Fields (NeRF) network with locally parameterized structures, enabling high-quality renderings in a reasonable time. On the other hand, approaches have used differentiable splatting instead of NeRF's ray casting to optimize radiance fields rapidly using Gaussian kernels, allowing for fine adaptation to the scene. However, differentiable ray casting of irregularly spaced kernels has been scarcely explored, while splatting, despite enabling fast rendering times, is susceptible to clearly visible artifacts. Our work closes this gap by providing a physically consistent formulation of the emitted radiance c and density σ, decomposed with Gaussian functions associated with Spherical Gaussians/Harmonics for all-frequency colorimetric representation. We also introduce a method enabling differentiable ray casting of irregularly distributed Gaussians, using an algorithm that integrates radiance fields slab by slab and leverages a BVH structure. This allows our approach to finely adapt to the scene while avoiding splatting artifacts. As a result, we achieve superior rendering quality compared to the state of the art while maintaining reasonable training times and achieving inference speeds of 25 FPS on the Blender dataset. Project page with videos and code: https://raygauss.github.io/
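
The central technical claim is the decomposition of both the density and the emitted radiance onto a set of irregularly placed Gaussian kernels. A plausible way to write that decomposition, sketched here from the abstract alone (the per-kernel weights σ_i and view-dependent colors c_i are notational assumptions, not reproduced from the paper), is

\[
\sigma(\mathbf{x}) \;=\; \sum_i \sigma_i \, G_i(\mathbf{x}),
\qquad
c(\mathbf{x}, \mathbf{d}) \;=\; \frac{\sum_i c_i(\mathbf{d})\, \sigma_i\, G_i(\mathbf{x})}{\sum_i \sigma_i\, G_i(\mathbf{x})},
\]

where each G_i is an anisotropic 3D Gaussian and c_i(d) is a view-dependent color evaluated from Spherical Gaussians/Harmonics. Both quantities then enter the standard volume-rendering integral along a ray r(t),

\[
C(\mathbf{r}) \;=\; \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, c(\mathbf{r}(t), \mathbf{d}) \, dt,
\qquad
T(t) \;=\; \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s)) \, ds\right).
\]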
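The abstract also outlines the integration strategy: the ray is traversed slab by slab, with a BVH supplying the Gaussians overlapping each slab. The Python sketch below illustrates only the slab-wise front-to-back compositing under the decomposition above; it is a simplified, assumption-laden illustration in which the BVH query is replaced by a flat list of Gaussians, and all names (render_ray, gaussian_density, slab_size, ...) are hypothetical, not taken from the released code.

```python
import numpy as np

def gaussian_density(x, mu, inv_cov, sigma_i):
    """Density contribution of one anisotropic Gaussian kernel at point x."""
    d = x - mu
    return sigma_i * np.exp(-0.5 * d @ inv_cov @ d)

def render_ray(origin, direction, gaussians, t_near, t_far, slab_size, n_samples=4):
    """Integrate color along one ray, slab by slab, with front-to-back compositing.

    `gaussians` is a list of (mu, inv_cov, sigma_i, color_i) tuples. In the actual
    method the kernels overlapping each slab would come from a BVH query and
    color_i would be a view-dependent Spherical Gaussian/Harmonic evaluation;
    both are simplified here.
    """
    color = np.zeros(3)
    transmittance = 1.0
    t0 = t_near
    while t0 < t_far and transmittance > 1e-4:   # early termination when opaque
        t1 = min(t0 + slab_size, t_far)
        dt = (t1 - t0) / n_samples
        for k in range(n_samples):
            t = t0 + (k + 0.5) * dt
            x = origin + t * direction
            sigma = 0.0
            weighted_c = np.zeros(3)
            for mu, inv_cov, sigma_i, color_i in gaussians:
                g = gaussian_density(x, mu, inv_cov, sigma_i)
                sigma += g
                weighted_c += g * np.asarray(color_i)
            if sigma > 0.0:
                c = weighted_c / sigma               # emitted radiance at x
                alpha = 1.0 - np.exp(-sigma * dt)    # opacity of this quadrature step
                color += transmittance * alpha * c   # front-to-back accumulation
                transmittance *= 1.0 - alpha
        t0 = t1
    return color

# Example: a single isotropic Gaussian placed on the ray.
gaussians = [(np.array([0.0, 0.0, 2.0]), np.eye(3) / 0.04, 5.0, np.array([1.0, 0.4, 0.1]))]
print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), gaussians, 0.5, 4.0, slab_size=0.5))
```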