{"title":"环境-神经辐射场:用环境照明增强弱光条件下神经辐射场的光列","authors":"Peng Zhang, Gengsheng Hu, Mei Chen, Mahmoud Emam","doi":"10.1007/s11042-024-19699-3","DOIUrl":null,"url":null,"abstract":"<p>NeRF can render photorealistic 3D scenes. It is widely used in virtual reality, autonomous driving, game development and other fields, and quickly becomes one of the most popular technologies in the field of 3D reconstruction. NeRF generates a realistic 3D scene by emitting light from the camera’s spatial coordinates and viewpoint, passing through the scene and calculating the view seen from the viewpoint. However, when the brightness of the original input image is low, it is difficult to recover the scene. Inspired by the ambient illumination in the Phong model of computer graphics, it is assumed that the final rendered image is the product of scene color and ambient illumination. In this paper, we employ Multi-Layer Perceptron (MLP) network to train the ambient illumination tensor <span>\\(\\textbf{I}\\)</span>, which is multiplied by the color predicted by NeRF to render images with normal illumination. Furthermore, we use tiny-cuda-nn as a backbone network to simplify the proposed network structure and greatly improve the training speed. Additionally, a new loss function is introduced to achieve a better image quality under low illumination conditions. 
The experimental results demonstrate the efficiency of the proposed method in enhancing low-light scene images compared with other state-of-the-art methods, with an overall average of PSNR: 20.53 , SSIM: 0.785, and LPIPS: 0.258 on the LOM dataset.</p>","PeriodicalId":18770,"journal":{"name":"Multimedia Tools and Applications","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ambient-NeRF: light train enhancing neural radiance fields in low-light conditions with ambient-illumination\",\"authors\":\"Peng Zhang, Gengsheng Hu, Mei Chen, Mahmoud Emam\",\"doi\":\"10.1007/s11042-024-19699-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>NeRF can render photorealistic 3D scenes. It is widely used in virtual reality, autonomous driving, game development and other fields, and quickly becomes one of the most popular technologies in the field of 3D reconstruction. NeRF generates a realistic 3D scene by emitting light from the camera’s spatial coordinates and viewpoint, passing through the scene and calculating the view seen from the viewpoint. However, when the brightness of the original input image is low, it is difficult to recover the scene. Inspired by the ambient illumination in the Phong model of computer graphics, it is assumed that the final rendered image is the product of scene color and ambient illumination. In this paper, we employ Multi-Layer Perceptron (MLP) network to train the ambient illumination tensor <span>\\\\(\\\\textbf{I}\\\\)</span>, which is multiplied by the color predicted by NeRF to render images with normal illumination. Furthermore, we use tiny-cuda-nn as a backbone network to simplify the proposed network structure and greatly improve the training speed. Additionally, a new loss function is introduced to achieve a better image quality under low illumination conditions. 
The experimental results demonstrate the efficiency of the proposed method in enhancing low-light scene images compared with other state-of-the-art methods, with an overall average of PSNR: 20.53 , SSIM: 0.785, and LPIPS: 0.258 on the LOM dataset.</p>\",\"PeriodicalId\":18770,\"journal\":{\"name\":\"Multimedia Tools and Applications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Multimedia Tools and Applications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11042-024-19699-3\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Multimedia Tools and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11042-024-19699-3","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Ambient-NeRF: light train enhancing neural radiance fields in low-light conditions with ambient-illumination
NeRF can render photorealistic 3D scenes. It is widely used in virtual reality, autonomous driving, game development, and other fields, and has quickly become one of the most popular technologies in 3D reconstruction. NeRF generates a realistic 3D scene by casting rays from the camera’s spatial coordinates along viewing directions, sampling through the scene, and computing the view seen from that viewpoint. However, when the brightness of the original input images is low, it is difficult to recover the scene. Inspired by the ambient-illumination term in the Phong model from computer graphics, we assume that the final rendered image is the product of the scene color and the ambient illumination. In this paper, we employ a Multi-Layer Perceptron (MLP) network to train the ambient-illumination tensor \(\textbf{I}\), which is multiplied by the color predicted by NeRF to render images with normal illumination. Furthermore, we use tiny-cuda-nn as the backbone network to simplify the proposed network structure and greatly improve training speed. Additionally, a new loss function is introduced to achieve better image quality under low-illumination conditions. Experimental results demonstrate the efficiency of the proposed method in enhancing low-light scene images compared with other state-of-the-art methods, with overall averages of PSNR 20.53, SSIM 0.785, and LPIPS 0.258 on the LOM dataset.
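The core composition step described in the abstract — a normally-lit color obtained as the element-wise product of the NeRF-predicted color and a learned ambient-illumination factor — can be sketched as follows. This is a minimal NumPy illustration of the multiplicative model only, not the paper's implementation: the function name `enhance_radiance`, the per-channel illumination values, and the clipping to [0, 1] are assumptions for the sketch; in the paper the illumination tensor \(\textbf{I}\) is predicted by an MLP rather than supplied by hand.

```python
import numpy as np

def enhance_radiance(nerf_rgb, ambient_illumination):
    """Compose an enhanced pixel color as the element-wise product of the
    NeRF-predicted low-light color and an ambient-illumination factor,
    following the Phong-inspired assumption image = color * illumination.

    Both arguments are arrays of shape (..., 3) with colors in [0, 1];
    the illumination factor may exceed 1 to brighten dark pixels, so the
    product is clipped back to the valid [0, 1] range.
    """
    product = np.asarray(nerf_rgb, dtype=float) * np.asarray(ambient_illumination, dtype=float)
    return np.clip(product, 0.0, 1.0)

# Toy example: a dim gray pixel brightened by a (hypothetical) learned
# per-channel factor of 3; the result is roughly [0.3, 0.3, 0.3].
dim_pixel = np.array([0.1, 0.1, 0.1])
illum = np.array([3.0, 3.0, 3.0])
print(enhance_radiance(dim_pixel, illum))
```

In the full method, the same multiplication is applied per ray during volume rendering, with \(\textbf{I}\) trained jointly so that the product matches a normally-illuminated target.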
About the journal:
Multimedia Tools and Applications publishes original research articles on multimedia development and system support tools as well as case studies of multimedia applications. It also features experimental and survey articles. The journal is intended for academics, practitioners, scientists and engineers who are involved in multimedia system research, design and applications. All papers are peer reviewed.
Specific areas of interest include:
- Multimedia Tools:
- Multimedia Applications:
- Prototype multimedia systems and platforms