{"title":"一种遮挡光场稀疏贝叶斯学习模型用于视图合成","authors":"Weiyan Chen , Changjian Zhu , Shan Zhang","doi":"10.1016/j.dsp.2025.105573","DOIUrl":null,"url":null,"abstract":"<div><div>Given a set of captured views with known positions, our goal is to obtain different views from new positions. However, synthesizing novel views from new positions is challenging since occlusion in real-world scenes is complex and ubiquitous. In this paper, we describe a method for synthesizing a novel view of an occluded scene, that is, an occlusion light field (OLF) sparse Bayesian learning network (OLiFi-Net). Specifically, we break down the process into OLF parameterization and interpolation reconstruction components. For the first component, we utilize a sparse Bayesian learning approach to establish an OLF expression. This expression can then be used to derive the convolution interpolation kernel function. For the second component, the kernel function can be applied to the circular convolutional network to synthesize novel views in a variety of occlusion situations. The reconstruction results on extensive datasets validate our model and demonstrate that we can render views for both occluded and nonoccluded scenes.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"168 ","pages":"Article 105573"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An occlusion light field sparse Bayesian learning model for view synthesis\",\"authors\":\"Weiyan Chen , Changjian Zhu , Shan Zhang\",\"doi\":\"10.1016/j.dsp.2025.105573\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Given a set of captured views with known positions, our goal is to obtain different views from new positions. However, synthesizing novel views from new positions is challenging since occlusion in real-world scenes is complex and ubiquitous. In this paper, we describe a method for synthesizing a novel view of an occluded scene, that is, an occlusion light field (OLF) sparse Bayesian learning network (OLiFi-Net). Specifically, we break down the process into OLF parameterization and interpolation reconstruction components. For the first component, we utilize a sparse Bayesian learning approach to establish an OLF expression. This expression can then be used to derive the convolution interpolation kernel function. For the second component, the kernel function can be applied to the circular convolutional network to synthesize novel views in a variety of occlusion situations. 
The reconstruction results on extensive datasets validate our model and demonstrate that we can render views for both occluded and nonoccluded scenes.</div></div>\",\"PeriodicalId\":51011,\"journal\":{\"name\":\"Digital Signal Processing\",\"volume\":\"168 \",\"pages\":\"Article 105573\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1051200425005950\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1051200425005950","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
An occlusion light field sparse Bayesian learning model for view synthesis
Given a set of captured views at known positions, our goal is to synthesize views from new positions. This is challenging, however, because occlusion in real-world scenes is complex and ubiquitous. In this paper, we describe a method for synthesizing novel views of occluded scenes: an occlusion light field (OLF) sparse Bayesian learning network (OLiFi-Net). Specifically, we break the process down into two components, OLF parameterization and interpolation reconstruction. For the first, we use a sparse Bayesian learning approach to establish an OLF expression, from which the convolution interpolation kernel function is derived. For the second, the kernel function is applied in a circular convolutional network to synthesize novel views under a variety of occlusion conditions. Reconstruction results on extensive datasets validate our model and demonstrate that it can render views of both occluded and non-occluded scenes.
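The abstract describes a two-stage pipeline: a sparse Bayesian learning stage that yields a sparse interpolation kernel, followed by a circular-convolution stage that blends captured views into a novel view. The sketch below is illustrative only and is not the authors' OLiFi-Net: it uses scikit-learn's ARDRegression as a stand-in for the sparse Bayesian learning step and an FFT-based circular shift-and-blend as a stand-in for the interpolation step, on an assumed toy 1-D light field; the sampling geometry, disparity value, and all variable names are hypothetical.

```python
# Illustrative sketch only (not the paper's OLiFi-Net): sparse Bayesian learning
# (ARD) estimates a sparse interpolation kernel from captured views, which is
# then applied by circular convolution to synthesize a view at a new position.
import numpy as np
from sklearn.linear_model import ARDRegression  # sparse Bayesian regression

rng = np.random.default_rng(0)

# Toy 1-D "light field": N captured views, each a 1-D scanline of P pixels.
N, P = 16, 128
t = np.linspace(0, 2 * np.pi, P, endpoint=False)
views = np.stack([np.sin(t + 0.1 * k) + 0.05 * rng.standard_normal(P)
                  for k in range(N)])          # shape (N, P)

# Stage 1: sparse Bayesian learning of the interpolation kernel.
# Regress a held-out target view on the remaining views, pixel-wise; ARD's
# automatic relevance determination drives most weights toward zero, giving a
# sparse set of contributing views (occlusion-aware in spirit).
target_idx = N // 2
X = np.delete(views, target_idx, axis=0).T     # (P, N-1): predictors per pixel
y = views[target_idx]                          # (P,): held-out target view
sbl = ARDRegression().fit(X, y)
kernel = sbl.coef_                             # sparse weights over views

# Stage 2: circular-convolution synthesis of a novel view.
# Each captured view is circularly shifted (assumed constant disparity per
# view index) and blended with the learned sparse weights.
def circular_shift(signal, shift):
    """Circularly shift a 1-D signal via the FFT shift theorem."""
    freqs = np.fft.fftfreq(signal.size)
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.exp(-2j * np.pi * freqs * shift)))

disparity_per_view = 0.5                       # assumed pixel shift between views
novel_view = sum(w * circular_shift(v, disparity_per_view * i)
                 for i, (w, v) in enumerate(zip(kernel, X.T)))

print("nonzero kernel weights:", np.count_nonzero(np.abs(kernel) > 1e-3))
print("RMSE vs. held-out view:", np.sqrt(np.mean((novel_view - y) ** 2)))
```

In this toy setting, ARD's evidence maximization suppresses most view weights, which mirrors the intuition that only a few unoccluded neighboring views should contribute to any given novel view; the paper's actual kernel derivation and network architecture are not reproduced here.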
Journal Introduction:
Digital Signal Processing: A Review Journal is one of the oldest and most established journals in the field of signal processing, yet it aims to be the most innovative. The Journal invites top-quality research articles at the frontiers of research in all aspects of signal processing. Our objective is to provide a platform for the publication of ground-breaking research in signal processing with both academic and industrial appeal.
The journal has a special emphasis on statistical signal processing methodology such as Bayesian signal processing, and encourages articles on emerging applications of signal processing such as:
• big data
• machine learning
• internet of things
• information security
• systems biology and computational biology
• financial time series analysis
• autonomous vehicles
• quantum computing
• neuromorphic engineering
• human-computer interaction and intelligent user interfaces
• environmental signal processing
• geophysical signal processing including seismic signal processing
• chemioinformatics and bioinformatics
• audio, visual and performance arts
• disaster management and prevention
• renewable energy