De-NeRF: Ultra-high-definition NeRF with deformable net alignment

IF 0.9 · CAS Tier 4, Computer Science · JCR Q4, COMPUTER SCIENCE, SOFTWARE ENGINEERING
Jianing Hou, Runjie Zhang, Zhongqi Wu, Weiliang Meng, Xiaopeng Zhang, Jianwei Guo
{"title":"De-NeRF: Ultra-high-definition NeRF with deformable net alignment","authors":"Jianing Hou,&nbsp;Runjie Zhang,&nbsp;Zhongqi Wu,&nbsp;Weiliang Meng,&nbsp;Xiaopeng Zhang,&nbsp;Jianwei Guo","doi":"10.1002/cav.2240","DOIUrl":null,"url":null,"abstract":"<p>Neural Radiance Field (NeRF) can render complex 3D scenes with viewpoint-dependent effects. However, less work has been devoted to exploring its limitations in high-resolution environments, especially when upscaled to ultra-high resolution (e.g., 4k). Specifically, existing NeRF-based methods face severe limitations in reconstructing high-resolution real scenes, for example, a large number of parameters, misalignment of the input data, and over-smoothing of details. In this paper, we present a novel and effective framework, called <i>De-NeRF</i>, based on NeRF and deformable convolutional network, to achieve high-fidelity view synthesis in ultra-high resolution scenes: (1) marrying the deformable convolution unit which can solve the problem of misaligned input of the high-resolution data. (2) Presenting a density sparse voxel-based approach which can greatly reduce the training time while rendering results with higher accuracy. Compared to existing high-resolution NeRF methods, our approach improves the rendering quality of high-frequency details and achieves better visual effects in 4K high-resolution scenes.</p>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"35 3","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Animation and Virtual Worlds","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cav.2240","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Neural Radiance Fields (NeRF) can render complex 3D scenes with view-dependent effects. However, little work has explored their limitations in high-resolution settings, especially when scaled up to ultra-high resolution (e.g., 4K). Specifically, existing NeRF-based methods face severe limitations in reconstructing high-resolution real scenes, such as a large number of parameters, misalignment of the input data, and over-smoothing of details. In this paper, we present a novel and effective framework, called De-NeRF, which combines NeRF with a deformable convolutional network to achieve high-fidelity view synthesis in ultra-high-resolution scenes: (1) it incorporates a deformable convolution unit that resolves the misalignment of the high-resolution input data; (2) it introduces a density-based sparse-voxel approach that greatly reduces training time while rendering results with higher accuracy. Compared to existing high-resolution NeRF methods, our approach improves the rendering quality of high-frequency details and achieves better visual results in 4K scenes.
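The abstract gives no implementation details, but contribution (1) is in the spirit of deformable-convolution feature alignment. Below is a minimal PyTorch sketch of such an alignment unit, using torchvision.ops.DeformConv2d: an offset head predicts per-pixel sampling offsets from the misaligned and reference features, and the deformable convolution resamples the misaligned features accordingly. The module name, channel widths, and offset head are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableAlign(nn.Module):
    """Hypothetical alignment unit: warps 'feat' toward the reference 'ref'."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict 2 offsets (dx, dy) per kernel tap and per output pixel
        # from the concatenation of the misaligned and reference features.
        self.offset_head = nn.Conv2d(2 * channels,
                                     2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        # The deformable convolution samples 'feat' at the predicted
        # offsets, producing features aligned to the reference.
        self.deform_conv = DeformConv2d(channels, channels,
                                        kernel_size, padding=pad)

    def forward(self, feat: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_head(torch.cat([feat, ref], dim=1))
        return self.deform_conv(feat, offsets)


if __name__ == "__main__":
    align = DeformableAlign(channels=64)
    feat = torch.randn(1, 64, 128, 128)  # features of a misaligned 4K crop
    ref = torch.randn(1, 64, 128, 128)   # features of the reference view
    print(align(feat, ref).shape)        # torch.Size([1, 64, 128, 128])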

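Contribution (2), the density-based sparse-voxel sampling, can likewise be illustrated with a toy sketch: ray samples falling into voxels that a coarse occupancy grid marks as empty are discarded before the radiance network is queried, which is the usual way such grids cut training time. The grid resolution, the threshold behind the occupancy grid, and the helper name skip_empty_samples are assumptions made for the sketch, not the paper's exact scheme.

import torch


def skip_empty_samples(points: torch.Tensor,
                       occupancy: torch.Tensor,
                       scene_min: torch.Tensor,
                       scene_max: torch.Tensor) -> torch.Tensor:
    """Boolean mask of ray samples that fall inside occupied voxels.

    points:    (N, 3) sample positions in world coordinates.
    occupancy: (R, R, R) boolean grid, True where accumulated density is high.
    """
    res = occupancy.shape[0]
    # Map world coordinates to integer voxel indices inside the grid.
    idx = ((points - scene_min) / (scene_max - scene_min) * res).long()
    idx = idx.clamp(0, res - 1)
    return occupancy[idx[:, 0], idx[:, 1], idx[:, 2]]


if __name__ == "__main__":
    res = 128
    occupancy = torch.zeros(res, res, res, dtype=torch.bool)
    occupancy[40:90, 40:90, 40:90] = True    # toy occupied region
    pts = torch.rand(4096, 3)                # samples in the unit cube
    mask = skip_empty_samples(pts, occupancy, torch.zeros(3), torch.ones(3))
    print(f"kept {int(mask.sum())} / {pts.shape[0]} samples")

Only the samples passing the mask need to be fed to the radiance network; the skipped ones contribute nothing to the volume-rendering integral, which is where the training-time savings come from.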
Source Journal
Computer Animation and Virtual Worlds
Category: Engineering & Technology - Computer Science, Software Engineering
CiteScore: 2.20
Self-citation rate: 0.00%
Articles per year: 90
Review time: 6-12 weeks
Journal description: With the advent of very powerful PCs and high-end graphics cards, there has been incredible development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and agent technology, these characters will become more and more autonomous and even intelligent. They will inhabit the Virtual Worlds in a Virtual Life together with animals and plants.