MD-NeRF: Enhancing Large-Scale Scene Rendering and Synthesis With Hybrid Point Sampling and Adaptive Scene Decomposition

Yichen Zhang;Zhi Gao;Wenbo Sun;Yao Lu;Yuhan Zhu
*IEEE Geoscience and Remote Sensing Letters*, vol. 21, pp. 1–5, published 2024-11-06. DOI: 10.1109/LGRS.2024.3492208

Abstract

Neural radiance fields (NeRFs) have achieved great success in 3-D representation and novel-view synthesis, attracting considerable research effort in this area. However, when rendering large-scale scenes from a drone perspective, existing NeRF methods exhibit pronounced distortions in scene detail, including missing textures and blurring of small objects. In this letter, we propose MD-NeRF, which mitigates such distortions by integrating a hybrid sampling strategy with an adaptive scene decomposition method. Specifically, an anti-aliasing sampling method combining spiral sampling with sampling along rays is presented to address rendering anomalies. In addition, we decompose a large scene into multiple subscenes using mixture-of-experts (MoE) modules. A shared expert is introduced to capture common features and reduce redundancy across the specialized experts. The combination of these two methods effectively minimizes distortions when rendering large-scale scenes and enables our model to produce finer textures and more coherent details. We have conducted extensive experiments on several large-scale unbounded scene datasets, and the results demonstrate that our approach achieves state-of-the-art performance on all of them, most notably a 1-dB improvement in PSNR on the Mill19 dataset.
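The abstract describes the hybrid sampling strategy only at a high level; the exact spiral parameterization is not given here. As a purely hypothetical illustration of the idea, the sketch below generates, for a single ray, conventional samples along the ray plus additional points on a helix winding around the ray axis. All function names, parameters, and the helix construction are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def hybrid_ray_samples(origin, direction, t_near=0.1, t_far=10.0,
                       n_along=8, n_spiral=8, radius=0.05, turns=2.0):
    """Hypothetical hybrid sampler: points along a ray plus a helix around it."""
    direction = direction / np.linalg.norm(direction)

    # 1) Conventional samples along the ray, evenly spaced in depth.
    t = np.linspace(t_near, t_far, n_along)
    along = origin + t[:, None] * direction

    # 2) Spiral (helical) samples winding around the ray axis.
    #    First build an orthonormal frame (u, v) perpendicular to the ray.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(direction @ helper) > 0.9:        # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(direction, helper)
    u /= np.linalg.norm(u)
    v = np.cross(direction, u)

    ts = np.linspace(t_near, t_far, n_spiral)
    angles = np.linspace(0.0, 2.0 * np.pi * turns, n_spiral)
    spiral = (origin + ts[:, None] * direction
              + radius * (np.cos(angles)[:, None] * u
                          + np.sin(angles)[:, None] * v))

    return np.concatenate([along, spiral], axis=0)  # (n_along + n_spiral, 3)
```

In such a scheme, all of the returned points would be fed to the radiance field like ordinary ray samples; the off-axis spiral points sample a small neighborhood of the ray, which is one plausible way to combat aliasing.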
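Likewise, the mixture-of-experts decomposition with a shared expert can be sketched in miniature. The abstract does not specify the expert architecture or gating; the sketch below assumes simple linear experts, a softmax gate over specialized experts, and the always-active shared expert's output added to the gated mixture. These are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedExpertMoE:
    """Minimal MoE layer: gated specialized experts plus one shared expert."""

    def __init__(self, d_in, d_out, n_experts):
        self.experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
        self.shared = rng.normal(size=(d_in, d_out))    # captures common features
        self.gate = rng.normal(size=(d_in, n_experts))  # routing weights

    def __call__(self, x):
        # Softmax gate decides how much each specialized expert contributes.
        logits = x @ self.gate
        logits -= logits.max(axis=-1, keepdims=True)    # numerical stability
        w = np.exp(logits)
        w /= w.sum(axis=-1, keepdims=True)

        # Weighted mixture of specialized experts (e.g., one per subscene).
        mix = sum(w[:, i:i + 1] * (x @ E) for i, E in enumerate(self.experts))

        # The shared expert is always active; factoring common structure into
        # it reduces redundancy across the specialized experts.
        return mix + x @ self.shared
```

The design intuition matches the abstract: specialized experts model subscene-specific detail, while the shared path carries features common to the whole scene.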