Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit
Title: Performance Analysis of DIBR-Based View Synthesis with Kinect Azure
DOI: 10.1109/IC3D53758.2021.9687195
Published in: 2021 International Conference on 3D Immersion (IC3D), 2021-12-08
Citations: 2
Abstract
DIBR (Depth Image Based Rendering) can synthesize Free Navigation virtual views from sparse multiview texture images and corresponding depth maps. There are two ways to obtain depth maps: through software or through depth sensors, a trade-off between precision and speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I's Depth Estimation Reference Software (DERS) with those acquired by Kinect Azure. We use IV-PSNR to evaluate their depth-map-based virtual views for the objective comparison. The quality metric with Kinect Azure consistently stays around 32 dB, and its active depth maps yield view synthesis results with better subjective quality in low-textured areas than DERS. Hence, we observe a worthy trade-off in depth performance between Kinect Azure and DERS, with the former having the advantage of negligible computational cost. We recommend the Kinect Azure for real-time DIBR applications.
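The core DIBR step the abstract refers to, warping a reference texture into a virtual viewpoint using its depth map, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a purely horizontal camera shift (so warping reduces to a per-pixel disparity), and the function name and parameters are illustrative.

```python
import numpy as np

def dibr_forward_warp(texture, depth, K, baseline):
    """Warp a reference view to a horizontally shifted virtual view.

    texture : (H, W, 3) color image of the reference view
    depth   : (H, W) per-pixel depth, same units as `baseline`
    K       : (3, 3) camera intrinsic matrix (fx at K[0, 0])
    baseline: horizontal translation of the virtual camera
    """
    H, W = depth.shape
    warped = np.zeros_like(texture)
    fx = K[0, 0]
    # Disparity: horizontal pixel shift induced by the camera translation,
    # inversely proportional to depth.
    disparity = fx * baseline / np.maximum(depth, 1e-6)
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    u_new = np.round(us - disparity).astype(int)
    # Painter's-algorithm z-buffering: splat far pixels first so that
    # nearer pixels overwrite them on collisions.
    order = np.argsort(-depth, axis=None)
    v_flat = vs.flatten()[order]
    u_src = us.flatten()[order]
    u_dst = u_new.flatten()[order]
    valid = (u_dst >= 0) & (u_dst < W)
    warped[v_flat[valid], u_dst[valid]] = texture[v_flat[valid], u_src[valid]]
    return warped  # unfilled pixels remain zero: disocclusion holes
```

Noisy sensor depth (Kinect Azure) or estimation errors (DERS) distort the disparity term directly, which is why the paper evaluates the resulting virtual views with IV-PSNR rather than comparing the depth maps themselves. Hole filling and view blending, which a full DIBR pipeline would add, are omitted here.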