NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field

Celong Liu, Zhong Li, Junsong Yuan, Yi Xu
{"title":"NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field","authors":"Celong Liu, Zhong Li, Junsong Yuan, Yi Xu","doi":"10.2312/sr.20221156","DOIUrl":null,"url":null,"abstract":"In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. Then, the scene-specific model is used to synthesize novel views. Different from previous light field approaches which require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can be optionally predicted by the network, thus enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state-of-the-arts, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eurographics Symposium on Rendering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2312/sr.20221156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous light field approaches, which require dense view sampling to reliably render novel views, our method renders novel views by sampling rays and querying the color of each ray from the network directly, enabling high-quality light field rendering from a sparser set of training images. Per-ray depth can optionally be predicted by the network, enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
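To make the two core ideas concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a two-plane parameterization that turns each camera ray into a 4D coordinate (u, v, s, t), and a deep fully connected network that maps that coordinate directly to an RGB color. The plane positions (z_uv, z_st), network depth and width, and helper names (ray_to_4d, LightFieldMLP) are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the two-plane light-field pipeline, assuming PyTorch.
# Plane positions, layer sizes, and names are illustrative, not the paper's code.
import torch
import torch.nn as nn


def ray_to_4d(origins, dirs, z_uv=1.0, z_st=2.0):
    """Two-plane parameterization: intersect each ray with the planes
    z = z_uv and z = z_st and return the 4D coordinate (u, v, s, t)."""
    t_uv = (z_uv - origins[..., 2]) / dirs[..., 2]
    t_st = (z_st - origins[..., 2]) / dirs[..., 2]
    uv = origins[..., :2] + t_uv.unsqueeze(-1) * dirs[..., :2]
    st = origins[..., :2] + t_st.unsqueeze(-1) * dirs[..., :2]
    return torch.cat([uv, st], dim=-1)  # shape (..., 4)


class LightFieldMLP(nn.Module):
    """Deep fully connected network mapping (u, v, s, t) to an RGB color."""

    def __init__(self, depth=8, width=256):
        super().__init__()
        layers, in_dim = [], 4
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]  # colors in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, uvst):
        return self.net(uvst)


# Rendering a novel view is one forward pass per pixel ray: there is no
# per-ray volume sampling as in NeRF-style methods, which is where the
# interactive frame rate claimed in the abstract comes from.
model = LightFieldMLP()
origins = torch.zeros(1024, 3)               # example camera center at the origin
dirs = torch.randn(1024, 3)
dirs[..., 2] = dirs[..., 2].abs() + 1e-3     # keep rays pointing toward the planes
rgb = model(ray_to_4d(origins, dirs))        # (1024, 3) predicted colors
```

Training such a model would then amount to minimizing a photometric loss between predicted and ground-truth pixel colors over rays sampled from the training images; the per-ray depth output mentioned in the abstract would be an additional network head, omitted here for brevity.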