Conditional plane-based multi-scene representation for novel view synthesis
Uchitha Rajapaksha, Hamid Laga, Dean Diepeveen, Mohammed Bennamoun, Ferdous Sohel
Neurocomputing, Volume 657, Article 131657 (published 2025-09-25). DOI: 10.1016/j.neucom.2025.131657
Abstract
Existing explicit and implicit-explicit hybrid neural representations for novel view synthesis are scene-specific. In other words, they represent only a single scene and require retraining for every novel scene. Implicit scene-agnostic methods rely on large multilayer perceptron (MLP) networks conditioned on learned features, and are computationally expensive at both training and rendering time. In contrast, we propose a novel plane-based representation that learns to represent multiple static and dynamic scenes during training and renders per-scene novel views during inference. The method consists of a deformation network, explicit feature planes, and a conditional decoder. Explicit feature planes are used to represent a time-stamped view space volume and a canonical volume shared across multiple scenes. The deformation network learns the deformations between the shared canonical object space and the time-stamped view space. The conditional decoder estimates the color and density of each scene, constrained by a scene-specific latent code. We evaluated and compared the performance of the proposed representation on static (NeRF) and dynamic (Plenoptic videos) datasets. The results show that explicit planes combined with tiny MLPs can be trained efficiently on multiple scenes simultaneously. Project page: https://anonpubcv.github.io/cplanes/.
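To make the architecture described in the abstract concrete, the following is a minimal sketch (not the authors' code) of a conditional plane-based representation: shared explicit feature planes queried by bilinear interpolation, plus a tiny MLP decoder conditioned on a learned per-scene latent code. All names, dimensions, and the tri-plane factorization are illustrative assumptions; the paper's deformation network for dynamic scenes is only indicated by a comment.

```python
# Illustrative sketch only: shared tri-plane features + per-scene latent codes
# + tiny conditional MLP decoder. Sizes and details are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConditionalPlanes(nn.Module):
    def __init__(self, num_scenes, res=128, feat_dim=32, latent_dim=16):
        super().__init__()
        # Three axis-aligned feature planes (xy, xz, yz) shared across scenes.
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res)) for _ in range(3)]
        )
        # One learned latent code per scene conditions the decoder.
        self.scene_codes = nn.Embedding(num_scenes, latent_dim)
        # Tiny MLP decoder: plane features + scene code -> (RGB, density).
        self.decoder = nn.Sequential(
            nn.Linear(3 * feat_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def sample_plane(self, plane, coords2d):
        # coords2d in [-1, 1]^2, shape (N, 2); bilinear lookup on the plane.
        grid = coords2d.view(1, -1, 1, 2)
        feat = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
        return feat.squeeze(0).squeeze(-1).t()                 # (N, C)

    def forward(self, xyz, scene_id):
        # xyz: (N, 3) canonical-space points in [-1, 1]^3. For dynamic scenes,
        # a deformation network would map time-stamped view space points here.
        xy, xz, yz = xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]
        feats = torch.cat([
            self.sample_plane(self.planes[0], xy),
            self.sample_plane(self.planes[1], xz),
            self.sample_plane(self.planes[2], yz),
        ], dim=-1)
        code = self.scene_codes(scene_id).expand(xyz.shape[0], -1)
        out = self.decoder(torch.cat([feats, code], dim=-1))
        rgb, sigma = torch.sigmoid(out[:, :3]), F.softplus(out[:, 3])
        return rgb, sigma


# Example: query 1024 random points for scene index 2.
model = ConditionalPlanes(num_scenes=5)
pts = torch.rand(1024, 3) * 2 - 1
rgb, sigma = model(pts, torch.tensor(2))
```

The key design choice this sketch illustrates is that only the small latent code is scene-specific, while the explicit planes and the tiny decoder are shared, which is what allows multiple scenes to be trained simultaneously without one large MLP per scene.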
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The essential topics covered are neurocomputing theory, practice, and applications.