{"title":"基于MSAF-DT的DNeRF图像去噪方法","authors":"Wenxuan Xu, Meng Huang, Qian Xu","doi":"10.1049/ipr2.70122","DOIUrl":null,"url":null,"abstract":"<p>Rendering novel and realistic images is crucial in applications such as augmented reality, virtual reality, 3D content creation, gaming, and the film industry. However, dynamic image rendering often suffers from significant noise, which compromises clarity and realism. Dynamic-Neural Radiance Fields (D-NeRF), an extension of the original NeRF model, addresses this challenge by enabling the rendering of dynamic images. Despite its advantages, D-NeRF often generates significant noise in the rendered images. Addressing this limitation, this paper proposes a Transformer-based model, Multi-Scale Attention Fusion Denoise Transformer (MSAF-DT), designed to enhance the clarity of rendered images. MSAF-DT constructs a deep neural network by stacking multiple Transformer blocks, with each block adaptively extracting complex features and dependencies from the data. The multi-head self-attention (MHSA) mechanism effectively captures long-range dependencies, which is crucial for processing sequences in dynamic radiance fields. Additionally, the model supports parallel processing of the entire sequence, significantly enhancing training efficiency. This design enables MSAF-DT to handle the noise present in D-NeRF outputs while preserving essential features. Experimental results on the Nerf_Synthetic dataset demonstrate that the proposed method outperforms D-NeRF in both image clarity and processing efficiency, achieving higher PSNR scores and faster convergence during training.</p>","PeriodicalId":56303,"journal":{"name":"IET Image Processing","volume":"19 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70122","citationCount":"0","resultStr":"{\"title\":\"A DNeRF Image Denoising Method Based on MSAF-DT\",\"authors\":\"Wenxuan Xu, Meng Huang, Qian Xu\",\"doi\":\"10.1049/ipr2.70122\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Rendering novel and realistic images is crucial in applications such as augmented reality, virtual reality, 3D content creation, gaming, and the film industry. However, dynamic image rendering often suffers from significant noise, which compromises clarity and realism. Dynamic-Neural Radiance Fields (D-NeRF), an extension of the original NeRF model, addresses this challenge by enabling the rendering of dynamic images. Despite its advantages, D-NeRF often generates significant noise in the rendered images. Addressing this limitation, this paper proposes a Transformer-based model, Multi-Scale Attention Fusion Denoise Transformer (MSAF-DT), designed to enhance the clarity of rendered images. MSAF-DT constructs a deep neural network by stacking multiple Transformer blocks, with each block adaptively extracting complex features and dependencies from the data. The multi-head self-attention (MHSA) mechanism effectively captures long-range dependencies, which is crucial for processing sequences in dynamic radiance fields. Additionally, the model supports parallel processing of the entire sequence, significantly enhancing training efficiency. This design enables MSAF-DT to handle the noise present in D-NeRF outputs while preserving essential features. 
Experimental results on the Nerf_Synthetic dataset demonstrate that the proposed method outperforms D-NeRF in both image clarity and processing efficiency, achieving higher PSNR scores and faster convergence during training.</p>\",\"PeriodicalId\":56303,\"journal\":{\"name\":\"IET Image Processing\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2025-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/ipr2.70122\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Image Processing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/ipr2.70122\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Image Processing","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/ipr2.70122","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Rendering novel and realistic images is crucial in applications such as augmented reality, virtual reality, 3D content creation, gaming, and the film industry. However, dynamic image rendering often suffers from significant noise, which compromises clarity and realism. Dynamic-Neural Radiance Fields (D-NeRF), an extension of the original NeRF model, addresses this challenge by enabling the rendering of dynamic images. Despite its advantages, D-NeRF often generates significant noise in the rendered images. Addressing this limitation, this paper proposes a Transformer-based model, Multi-Scale Attention Fusion Denoise Transformer (MSAF-DT), designed to enhance the clarity of rendered images. MSAF-DT constructs a deep neural network by stacking multiple Transformer blocks, with each block adaptively extracting complex features and dependencies from the data. The multi-head self-attention (MHSA) mechanism effectively captures long-range dependencies, which is crucial for processing sequences in dynamic radiance fields. Additionally, the model supports parallel processing of the entire sequence, significantly enhancing training efficiency. This design enables MSAF-DT to handle the noise present in D-NeRF outputs while preserving essential features. Experimental results on the Nerf_Synthetic dataset demonstrate that the proposed method outperforms D-NeRF in both image clarity and processing efficiency, achieving higher PSNR scores and faster convergence during training.
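
The abstract describes the core mechanism only in prose: noisy D-NeRF renderings are passed through a stack of Transformer blocks whose multi-head self-attention (MHSA) captures long-range dependencies while the whole sequence is processed in parallel, and quality is reported as PSNR. The paper's code is not reproduced here, so the following is a minimal, hypothetical PyTorch sketch of that general idea; the class name DenoiseTransformerSketch, the patch and embedding sizes, and the use of nn.TransformerEncoder are illustrative assumptions, not the authors' MSAF-DT implementation (in particular, the multi-scale attention fusion itself is not modelled).

# Hypothetical sketch (not the authors' code): a denoiser built by stacking
# Transformer encoder blocks with multi-head self-attention, applied to
# flattened patches of a noisy rendered image.
import torch
import torch.nn as nn


class DenoiseTransformerSketch(nn.Module):
    """Embed noisy patches, pass them through stacked Transformer blocks
    (MHSA + feed-forward), and project back to pixel space."""

    def __init__(self, patch_dim: int = 768, d_model: int = 256,
                 num_heads: int = 8, num_blocks: int = 6):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)  # patch -> token
        block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=num_blocks)
        self.head = nn.Linear(d_model, patch_dim)   # token -> patch

    def forward(self, noisy_patches: torch.Tensor) -> torch.Tensor:
        # noisy_patches: (batch, num_patches, patch_dim); the entire patch
        # sequence is processed in parallel, as the abstract emphasises.
        tokens = self.embed(noisy_patches)
        tokens = self.blocks(tokens)
        return self.head(tokens)                    # denoised patches


def psnr(pred: torch.Tensor, target: torch.Tensor,
         max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio, the metric reported in the abstract:
    PSNR = 10 * log10(max_val^2 / MSE)."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    model = DenoiseTransformerSketch()
    noisy = torch.rand(2, 64, 768)   # 2 images, 64 patches of 16x16x3 values
    clean = torch.rand(2, 64, 768)
    denoised = model(noisy)
    print(denoised.shape, psnr(denoised.clamp(0, 1), clean).item())

Stacking standard encoder blocks in this way reflects the two benefits the abstract highlights: long-range dependencies captured by MHSA and parallel processing of the whole sequence during training.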
Journal introduction:
The IET Image Processing journal encompasses research areas related to the generation, processing and communication of visual information. The focus of the journal is the coverage of the latest research results in image and video processing, including image generation and display, enhancement and restoration, segmentation, colour and texture analysis, coding and communication, implementations and architectures as well as innovative applications.
Principal topics include:
Generation and Display - Imaging sensors and acquisition systems, illumination, sampling and scanning, quantization, colour reproduction, image rendering, display and printing systems, evaluation of image quality.
Processing and Analysis - Image enhancement, restoration, segmentation, registration, multispectral, colour and texture processing, multiresolution processing and wavelets, morphological operations, stereoscopic and 3-D processing, motion detection and estimation, video and image sequence processing.
Implementations and Architectures - Image and video processing hardware and software, design and construction, architectures and software, neural, adaptive, and fuzzy processing.
Coding and Transmission - Image and video compression and coding, compression standards, noise modelling, visual information networks, streamed video.
Retrieval and Multimedia - Storage of images and video, database design, image retrieval, video annotation and editing, mixed media incorporating visual information, multimedia systems and applications, image and video watermarking, steganography.
Applications - Innovative application of image and video processing technologies to any field, including life sciences, earth sciences, astronomy, document processing and security.
Current Special Issue Call for Papers:
Evolutionary Computation for Image Processing - https://digital-library.theiet.org/files/IET_IPR_CFP_EC.pdf
AI-Powered 3D Vision - https://digital-library.theiet.org/files/IET_IPR_CFP_AIPV.pdf
Multidisciplinary advancement of Imaging Technologies: From Medical Diagnostics and Genomics to Cognitive Machine Vision, and Artificial Intelligence - https://digital-library.theiet.org/files/IET_IPR_CFP_IST.pdf
Deep Learning for 3D Reconstruction - https://digital-library.theiet.org/files/IET_IPR_CFP_DLR.pdf