Knowledge-Based Systems, Volume 330, Article 114568 (published 2025-10-11). DOI: 10.1016/j.knosys.2025.114568. Impact Factor 7.6, JCR Q1 (Computer Science, Artificial Intelligence).
Authors: Feng Wang, Liju Yin, Yiming Qin, Xiaoning Gao, Xiangyu Tang, Hui Zhou
Ray-decomposed and gradient-constrained NeRF for few-shot view synthesis under low-light conditions
Neural Radiance Fields (NeRF) have shown impressive performance in novel view synthesis, providing high-quality visual results for 3D reconstruction. However, existing NeRF-based methods often fail under extreme low-light conditions with sparse-view inputs, suffering from color distortion and degraded visual quality due to inaccurate illumination modeling and overfitting to limited views. To address these challenges, we propose R-GNeRF, a novel framework that leverages ray decomposition and gradient constraint. Specifically, we decompose sampled rays into reflective and illumination components, each modeled by an independent MLP in an unsupervised manner. A gradient constraint guides the network to learn physically plausible illumination fields, allowing the synthesis of novel views under normal lighting using only the reflective component. In addition, we introduce a view-consistency annealing strategy that adaptively adjusts the sampling sphere radius based on projection consistency across views, mitigating overfitting and improving reconstruction of fine details in few-shot synthesis. To evaluate performance under extreme low-light, we construct the 3L-P dataset using a multi-pixel photon counter (MPPC) at illuminance levels of 10⁻³ and 10⁻⁴ lux, providing challenging low-light images. Extensive experiments demonstrate that R-GNeRF consistently outperforms existing methods in low-light few-shot novel view synthesis, achieving higher visual fidelity and accurate depth reconstruction while maintaining efficient rendering.
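The decomposition described in the abstract can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the per-ray MLPs are replaced by plain arrays, and the gradient constraint is approximated here as a total-variation penalty that favors a smooth illumination field; all names below are hypothetical.

```python
import numpy as np

def compose(reflectance, illumination):
    # Retinex-style composition: an observed low-light color is modeled as
    # the element-wise product of a reflectance term and an illumination term.
    return reflectance * illumination

def tv_penalty(illum):
    # Approximate gradient constraint: sum of absolute finite differences
    # (total variation), which is small for smooth, physically plausible
    # illumination fields and large for noisy ones.
    dy = np.abs(np.diff(illum, axis=0)).sum()
    dx = np.abs(np.diff(illum, axis=1)).sum()
    return dy + dx

# Toy example: flat reflectance lit by a smoothly varying illumination map.
H, W = 4, 4
reflectance = np.full((H, W), 0.5)
illumination = np.linspace(0.1, 0.9, H * W).reshape(H, W)
observed = compose(reflectance, illumination)

# Unsupervised objective sketch: reconstruct the observation from the two
# components while penalizing non-smooth illumination. Rendering under
# "normal lighting" would then use the reflectance component alone.
loss_recon = np.mean((observed - reflectance * illumination) ** 2)
loss_smooth = tv_penalty(illumination)
```

In the paper the two components are predicted per sampled ray by independent MLPs and trained jointly; the sketch only shows why the smoothness term steers the split, since without it the product decomposition is ambiguous.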
Journal profile:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.