Perceptual Quality Assessment for NeRF-Generated Scenes: A Training Reference Metric

Shihao Luo; Nguyen Tien Phong; Chibuike Onuoha; Truong Cong Thang

IEEE Access, vol. 13, pp. 152277-152292, published 2025-08-29
DOI: 10.1109/ACCESS.2025.3603970
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11145061
Article page: https://ieeexplore.ieee.org/document/11145061/
JCR: Q2 (Computer Science, Information Systems)
Citations: 0
Abstract
Perceptual quality assessment is a key challenge in traditional image processing as well as in emerging AI-based novel view synthesis methods such as neural radiance fields (NeRF). NeRF has revolutionized 3D scene reconstruction by leveraging neural networks for volumetric rendering, achieving unprecedented photorealistic results. Currently, the perceptual quality assessment of NeRF still relies heavily on full-reference (FR) metrics, such as PSNR and SSIM, which require external reference images from predefined camera positions and suffer from significant limitations. In this paper, we propose a new quality metric that directly leverages training views to quantify the perceptual quality of NeRF-generated scenes, eliminating the need for external predefined reference images and camera position metadata. In the proposed approach, we first extract hierarchical and abstract features from training views using pretrained deep convolutional neural networks (CNNs) and then construct a reference representation for perceptual quality evaluation through feature-space interpolation. We evaluate the effectiveness of the proposed approach with two options: one with pretrained CNNs only (without calibration) and another with calibration applied to learn the importance of hierarchical feature stages. The experimental results demonstrate the effectiveness of the proposed method, which outperforms traditional no-reference (NR) metrics while being comparable to popular FR metrics. We found that deep features trained for high-level classification tasks have strong potential to quantify perceptual quality across different viewpoints of the same object in NeRF. The code is released at https://github.com/WemoLuo/NVQS
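The pipeline the abstract describes (hierarchical deep features from training views, a reference built by feature-space interpolation, and an optional per-stage calibration weighting) can be sketched in miniature as follows. This is not the authors' implementation: random arrays stand in for CNN stage outputs, and all function names, weights, and shapes are illustrative assumptions.

```python
import numpy as np

def interpolate_reference(train_feats, alphas):
    """Blend per-view feature maps into one reference representation.
    train_feats: list of (C, H, W) arrays, one per training view.
    alphas: blending weights over the training views (normalized here)."""
    alphas = np.asarray(alphas, dtype=np.float64)
    alphas = alphas / alphas.sum()  # enforce a convex combination
    return sum(a * f for a, f in zip(alphas, train_feats))

def perceptual_distance(ref_stages, test_stages, stage_weights=None):
    """Weighted mean-squared distance between reference and test features,
    accumulated over hierarchical CNN stages. The paper's 'calibration'
    option learns stage_weights; uncalibrated use fixes them uniform."""
    n = len(ref_stages)
    w = stage_weights if stage_weights is not None else [1.0 / n] * n
    return float(sum(wi * np.mean((r - t) ** 2)
                     for wi, r, t in zip(w, ref_stages, test_stages)))

# Toy stand-ins for 3-stage CNN features of two training views
# and of a rendered (NeRF-generated) view lying between them.
rng = np.random.default_rng(0)
view_a = [rng.normal(size=(8, 4, 4)) for _ in range(3)]
view_b = [rng.normal(size=(8, 4, 4)) for _ in range(3)]
rendered = [0.5 * (a + b) for a, b in zip(view_a, view_b)]

# Per-stage reference: interpolate the two views halfway in feature space.
reference = [interpolate_reference([a, b], [0.5, 0.5])
             for a, b in zip(view_a, view_b)]

score = perceptual_distance(reference, rendered)
print(score)  # ~0: the rendered view coincides with the interpolated reference
```

A degraded rendering would drift away from the interpolated reference in feature space and receive a larger distance; with real CNN features the interpolation weights would depend on the target viewpoint's relation to the training cameras.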
IEEE Access · Computer Science, Information Systems · Engineering, Electrical & Electronic
CiteScore
9.80
Self-citation rate
7.70%
Articles published
6673
Review turnaround
6 weeks
About the journal:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either Accept or Reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.