{"title":"MD-NeRF: Enhancing Large-Scale Scene Rendering and Synthesis With Hybrid Point Sampling and Adaptive Scene Decomposition","authors":"Yichen Zhang;Zhi Gao;Wenbo Sun;Yao Lu;Yuhan Zhu","doi":"10.1109/LGRS.2024.3492208","DOIUrl":null,"url":null,"abstract":"Neural radiance fields (NeRFs) have gained great success in 3-D representation and novel-view synthesis, which attracted great efforts devoted to this area. However, when rendering large-scale scenes from a drone perspective, existing NeRF methods exhibit pronounced distortions in scene detail including absent textures and blurring of small objects. In this letter, we propose MD-NeRF to mitigate such distortions by integrating a hybrid sampling strategy and an adaptive scene decomposition method. Specifically, an anti-aliasing sampling method combining spiral sampling and sampling along rays is presented to address rendering anomalies. In addition, we decompose a large scene into multiple subscenes using a mixture of expert (MoE) modules. A shared expert is introduced to capture common features and reduce redundancy across the specialized experts. Consequently, the combination of these two methods effectively minimizes distortions when rendering large-scale scenes and enables our model to produce finer textures and more coherent details. 
We have conducted extensive experiments on several large-scale unbounded scene datasets, and the results demonstrate that our approach has achieved state-of-the-art performance on all datasets, most notably evidenced by a 1-dB enhancement in PSNR metrics on the Mill19 dataset.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"21 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10745540/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Neural radiance fields (NeRFs) have achieved great success in 3-D representation and novel-view synthesis, attracting substantial research effort in this area. However, when rendering large-scale scenes from a drone perspective, existing NeRF methods exhibit pronounced distortions in scene detail, including missing textures and blurring of small objects. In this letter, we propose MD-NeRF, which mitigates such distortions by integrating a hybrid sampling strategy and an adaptive scene decomposition method. Specifically, an anti-aliasing sampling method that combines spiral sampling with sampling along rays is presented to address rendering anomalies. In addition, we decompose a large scene into multiple subscenes using mixture-of-experts (MoE) modules. A shared expert is introduced to capture common features and reduce redundancy across the specialized experts. The combination of these two methods effectively minimizes distortions when rendering large-scale scenes and enables our model to produce finer textures and more coherent details. We have conducted extensive experiments on several large-scale unbounded scene datasets; the results demonstrate that our approach achieves state-of-the-art performance on all of them, most notably a 1-dB PSNR improvement on the Mill19 dataset.
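The abstract does not give implementation details for the hybrid sampling strategy. As a hypothetical illustration only, combining conventional samples along a ray with offsets on a spiral around the ray axis might look like the sketch below; the function name, spiral parameterization, radius, and sample count are all assumptions, not the paper's actual method.

```python
import numpy as np

def hybrid_ray_samples(origin, direction, near, far,
                       n_samples=8, radius=0.01, turns=2.0):
    """Hypothetical sketch: sample depths along a ray as in standard NeRF,
    then offset each point onto a spiral winding around the ray axis,
    so nearby rays are probed as well (a crude anti-aliasing effect)."""
    direction = direction / np.linalg.norm(direction)
    # Conventional along-ray sample depths between the near and far planes.
    t = np.linspace(near, far, n_samples)
    # Build an orthonormal frame (u, v) perpendicular to the ray direction.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(direction[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(direction, helper)
    u /= np.linalg.norm(u)
    v = np.cross(direction, u)
    # Spiral phase advances with depth, tracing `turns` revolutions.
    phase = 2.0 * np.pi * turns * (t - near) / (far - near)
    offsets = radius * (np.cos(phase)[:, None] * u + np.sin(phase)[:, None] * v)
    return origin + t[:, None] * direction + offsets
```

Every returned point lies at distance `radius` from the ray line, so the sampled positions sweep a thin helix around the ray rather than a single line of points.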
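The MoE decomposition with a shared expert can likewise only be sketched, since the letter's abstract does not specify the architecture. In the toy NumPy version below, a gate routes each input to one specialized expert while a shared expert's output is always added; all layer sizes, the top-1 routing rule, and the class name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMoE:
    """Toy mixture-of-experts sketch (not the paper's architecture):
    specialized experts model individual subscenes, while a shared
    expert captures features common to all of them."""

    def __init__(self, dim=16, n_experts=4):
        self.gate = rng.standard_normal((dim, n_experts))
        self.experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
        self.shared = rng.standard_normal((dim, dim))  # common-feature expert

    def forward(self, x):
        # Hard top-1 routing: each input row goes to one specialized expert.
        idx = np.argmax(x @ self.gate, axis=1)
        out = np.empty_like(x)
        for i, e in enumerate(idx):
            out[i] = x[i] @ self.experts[e]
        # The shared expert is applied to every input and added on top,
        # so common structure need not be duplicated in each expert.
        return out + x @ self.shared
```

In a real NeRF-style model the experts would be MLPs and the gate would be learned jointly with a routing loss; this linear version only shows the routing-plus-shared-path structure.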