Title: HiSURF: Hierarchical semantic-guided unified radiance field for generalizing across unseen scenes
Authors: Qiang Liu, Teng Wang, Zhiguo Zhang, Jun Nie, Xiao Lu, Chunyang Sheng, Shibin Song, Qiaoqiao Sun, Haixia Wang
Journal: Knowledge-Based Systems, Volume 338, Article 115530 (Q1, Computer Science, Artificial Intelligence; Impact Factor 7.6)
DOI: 10.1016/j.knosys.2026.115530
Publication date: 2026-04-08 (Epub 2026-02-10)
URL: https://www.sciencedirect.com/science/article/pii/S0950705126002728
Citations: 0
Abstract
Recent advancements in neural field representations have significantly improved novel view synthesis for seen scenes. However, generalizing learned representations to unseen scenes remains challenging. To address this problem, we propose the Hierarchical Semantic-guided Unified Radiance Field (HiSURF), which leverages hierarchical semantic attributes from seen scenes as prior knowledge. Scene representations for unseen environments can be synthesized by establishing an interpretable mapping between semantic attributes and visual features. Specifically, HiSURF consists of a local semantic embedding module, a global semantic mapping module, and a composite rendering module. For a scene with multiple objects, the local module disentangles object attributes to generate fine object-level triplanes, which preserve structural and surface details of the objects. At the same time, the global module utilizes attributes of the holistic scene to construct a coarse scene-level triplane, which ensures layout consistency and contextual coherence across the scene. The composite rendering module then integrates features from both the object-level and scene-level triplanes for high-quality novel view synthesis. Experimental results on the ClevrTex and Kubric datasets demonstrate that HiSURF not only outperforms existing approaches in novel view synthesis but also exhibits superior generalization to unseen scenes.
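The abstract describes querying both object-level and scene-level triplanes at 3D points before rendering. The sketch below illustrates the standard triplane query used in such representations: a 3D point is projected onto the xy, xz, and yz feature planes, each plane is bilinearly sampled, and the features are aggregated. The fusion of scene-level and object-level features here is plain summation, a placeholder assumption; the paper's actual composite rendering module is learned, and all function names are hypothetical.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample a (H, W, C) feature plane at continuous
    coordinates (u, v) in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0]
            + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0]
            + wx * wy * plane[y1, x1])

def triplane_features(planes, point):
    """Query a triplane at a 3D point in [0, 1]^3 by projecting it
    onto the xy, xz, and yz planes and summing the sampled features."""
    x, y, z = point
    return (bilinear_sample(planes["xy"], x, y)
            + bilinear_sample(planes["xz"], x, z)
            + bilinear_sample(planes["yz"], y, z))

def composite_features(scene_planes, object_planes_list, point):
    """Fuse the coarse scene-level triplane with the fine object-level
    triplanes; summation stands in for the paper's learned fusion."""
    feat = triplane_features(scene_planes, point)
    for obj_planes in object_planes_list:
        feat = feat + triplane_features(obj_planes, point)
    return feat
```

In a full pipeline, the fused feature vector at each sample point along a camera ray would be decoded by an MLP into density and color, then alpha-composited into a pixel, as in standard volume rendering.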
About the journal:
Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on systems built with knowledge-based and other artificial intelligence techniques. The journal aims to support human prediction and decision-making through data science and computational techniques, to provide balanced coverage of theory and practical study, and to encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.