Performance analysis of DIBR-based view synthesis with Kinect Azure

Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit
{"title":"Performance analysis of DIBR-based view synthesis with kinect azure","authors":"Yupeng Xie, André Souto, Sarah Fachada, Daniele Bonatto, Mehrdad Teratani, G. Lafruit","doi":"10.1109/IC3D53758.2021.9687195","DOIUrl":null,"url":null,"abstract":"DIBR (Depth Image Based Rendering) can synthesize Free Navigation virtual views with sparse multiview texture images and corresponding depth maps. There are two ways to obtain depth maps: through software or depth sensors, which is a trade-off between precision versus speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I’s Depth Estimation Reference Software with that acquired by Kinect Azure. We use IV-PSNR to evaluate their depth maps-based virtual views for the objective comparison. The quality metric with Kinect Azure regularly stay around 32 dB, and its active depth maps yields view synthesis results with better subjective performance in low-textured areas than DERS. Hence, we observe a worthy trade-off in depth performance between Kinect Azure and DERS, but with an advantage of negligible computational cost from the former. We recommend the Kinect Azure for real-time DIBR applications.","PeriodicalId":382937,"journal":{"name":"2021 International Conference on 3D Immersion (IC3D)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on 3D Immersion (IC3D)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3D53758.2021.9687195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

DIBR (Depth Image Based Rendering) can synthesize Free Navigation virtual views from sparse multiview texture images and corresponding depth maps. There are two ways to obtain depth maps: through estimation software or depth sensors, a trade-off between precision and speed (computational cost and processing time). This article compares the performance of depth maps estimated by MPEG-I's Depth Estimation Reference Software (DERS) with that of depth maps acquired by the Kinect Azure. We use IV-PSNR to evaluate the virtual views synthesized from their depth maps for the objective comparison. The quality metric with Kinect Azure regularly stays around 32 dB, and its active depth maps yield view synthesis results with better subjective performance in low-textured areas than DERS. Hence, we observe a worthwhile trade-off in depth performance between Kinect Azure and DERS, with the advantage of negligible computational cost for the former. We recommend the Kinect Azure for real-time DIBR applications.
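The abstract rests on DIBR's core operation: re-projecting reference-view pixels into a virtual camera using their depth values. As a rough illustration only (not the paper's pipeline, nor the MPEG reference software), the sketch below shows forward 3D warping under the assumption of pinhole cameras with known intrinsics and a known relative pose; all function and parameter names are illustrative.

```python
# Minimal sketch of the 3D warping step at the core of DIBR view synthesis.
# Assumes pinhole cameras with known intrinsics and a relative pose (R, t)
# from the reference camera to the virtual camera; multi-view blending and
# hole filling, as done in the MPEG reference software, are omitted.
import numpy as np

def warp_to_virtual_view(texture, depth, K_ref, K_virt, R, t):
    """Forward-warp a reference texture into a virtual view using its depth map.

    texture: (H, W, 3) uint8 image of the reference view
    depth:   (H, W) float array, depth along the optical axis
    K_ref, K_virt: (3, 3) camera intrinsic matrices
    R, t:    rotation (3, 3) and translation (3,) from reference to virtual camera
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])  # homogeneous pixel coords

    # Back-project reference pixels to 3D points in the reference camera frame.
    rays = np.linalg.inv(K_ref) @ pix          # (3, H*W) viewing rays
    points = rays * depth.ravel()              # scale each ray by its depth

    # Transform into the virtual camera frame and project with its intrinsics.
    points_v = R @ points + t[:, None]
    proj = K_virt @ points_v
    z = proj[2]
    u_v = np.round(proj[0] / z).astype(int)
    v_v = np.round(proj[1] / z).astype(int)

    # Splat with a z-buffer so the nearest point wins at each target pixel.
    synth = np.zeros_like(texture)
    zbuf = np.full((H, W), np.inf)
    valid = (z > 0) & (u_v >= 0) & (u_v < W) & (v_v >= 0) & (v_v < H)
    src_colors = texture.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v_v[i], u_v[i]]:
            zbuf[v_v[i], u_v[i]] = z[i]
            synth[v_v[i], u_v[i]] = src_colors[i]
    return synth  # unfilled pixels stay black (disocclusion holes)
```

Real pipelines additionally blend several reference views and inpaint the disocclusion holes that this sketch leaves black, and the paper's objective scores use IV-PSNR, an MPEG immersive-video metric rather than the plain per-pixel PSNR.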