MixRF: Universal Mixed Radiance Fields with Points and Rays Aggregation

Haiyang Bai, Tao Lu, Jiaqi Zhu, Wei Huang, Chang Gou, Jie Guo, Lijun Chen, Yanwen Guo
IEEE Transactions on Visualization and Computer Graphics, published 2025-05-20. DOI: 10.1109/TVCG.2025.3572015

Abstract

Recent advancements in neural rendering methods, such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D-GS), have significantly revolutionized photo-realistic novel view synthesis of scenes with multiple photos or videos as input. However, existing approaches within the NeRF and 3D-GS frameworks often assume the independence of point sampling and ray casting, which are intrinsic to volume rendering and alpha-blending techniques. These underlying assumptions limit the ability to aggregate context within subspaces, such as densities and colors in the radiance fields and pixels on the image plane, leading to synthesized images that lack fine details and smoothness. To overcome this, we propose a universal framework, MixRF, comprising a Radiance Field Mixer (RF-mixer) and a Color Domain Mixer (CD-mixer), to sufficiently aggregate and fully explore information in neighboring sampled points and cast rays, respectively. The RF-mixer treats sampled points as an explicit point cloud, enabling the aggregation of density and color attributes from neighboring points to better capture local geometry and appearance. Meanwhile, the CD-mixer rearranges rendered pixels on the sub-image plane, improving smoothness and recovering fine details and textures. Both mixers employ a kernel-based mixing strategy to facilitate effective and controllable attribute aggregation, ensuring a more comprehensive exploration of radiance values and pixel information. Extensive experiments demonstrate that our MixRF framework is compatible with radiance field-based methods, including NeRF and 3D-GS designs. The proposed framework dramatically enhances performance in both qualitative and quantitative evaluations, with less than a 25% increase in computational overhead during inference.
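To make the idea of kernel-based attribute aggregation concrete, the following is a minimal generic sketch of the kind of operation the RF-mixer performs: each sampled point blends its attributes (e.g., density and color) with those of its nearest neighbors under normalized Gaussian kernel weights. The function name, the choice of Gaussian kernel, the bandwidth, and the brute-force neighbor search are all illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def kernel_mix(points, attrs, k=8, bandwidth=0.1):
    """Blend each point's attributes with those of its k nearest
    neighbours using normalized Gaussian kernel weights.
    A generic sketch of kernel-based mixing, not MixRF's operator."""
    n = points.shape[0]
    # Pairwise squared distances (fine for small n; a KD-tree scales better).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbours per point (the point itself included).
    nn = np.argsort(d2, axis=1)[:, :k]
    # Gaussian kernel weights over neighbour distances, normalized per point.
    w = np.exp(-d2[np.arange(n)[:, None], nn] / (2 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # Convex combination of neighbour attributes.
    return (w[:, :, None] * attrs[nn]).sum(axis=1)

# Toy example: 32 random sample points with (density, r, g, b) attributes.
rng = np.random.default_rng(0)
pts = rng.random((32, 3))
attrs = rng.random((32, 4))
mixed = kernel_mix(pts, attrs)
```

Because the weights are normalized, the mixed attributes are a convex combination of the neighbors' values, which is what yields the smoothing effect the abstract describes; the bandwidth controls how aggressively local context is pooled.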
