Displacement Error Analysis of 6-DoF Virtual Reality

Ridvan Aksu, Jacob Chakareski, V. Velisavljevic
{"title":"Displacement Error Analysis of 6-DoF Virtual Reality","authors":"Ridvan Aksu, Jacob Chakareski, V. Velisavljevic","doi":"10.1145/3349801.3349812","DOIUrl":null,"url":null,"abstract":"Virtual view synthesis is a critical step in enabling Six-Degrees of Freedom (DoF) immersion experiences in Virtual Reality (VR). It comprises synthesis of virtual viewpoints for a user navigating the immersion environment, based on a small subset of captured viewpoints featuring texture and depth maps. We investigate the extreme values of the displacement error in view synthesis caused by depth map quantization, for a given 6DoF VR video dataset, particularly based on the camera settings, scene properties, and the depth map quantization error. We establish a linear relationship between the displacement error and the quantization error, scaled by the sine of the angle between the location of the object and the virtual view in the 3D scene, formed at the reference camera location. In the majority of cases the horizontal and vertical displacement errors induced at a pixel location of a reconstructed 360° viewpoint comprising the immersion environment are respectively proportional to 3/5 and 1/5 of the respective quantization error. Also, the distance between the reference view and the synthesized view severely increases the displacement error. Following these observations: displacement error values can be predicted for given pixel coordinates and quantization error, and this can serve as a first step towards modeling the relationship between the encoding rate of reference views and the quality of synthesized views.","PeriodicalId":299138,"journal":{"name":"Proceedings of the 13th International Conference on Distributed Smart Cameras","volume":"94 6","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 13th International Conference on Distributed Smart Cameras","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3349801.3349812","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Virtual view synthesis is a critical step in enabling Six-Degrees-of-Freedom (6DoF) immersion experiences in Virtual Reality (VR). It comprises the synthesis of virtual viewpoints for a user navigating the immersive environment, based on a small subset of captured viewpoints featuring texture and depth maps. We investigate the extreme values of the displacement error in view synthesis caused by depth map quantization, for a given 6DoF VR video dataset, as a function of the camera settings, the scene properties, and the depth map quantization error. We establish a linear relationship between the displacement error and the quantization error, scaled by the sine of the angle formed at the reference camera location between the object and the virtual viewpoint in the 3D scene. In the majority of cases, the horizontal and vertical displacement errors induced at a pixel location of a reconstructed 360° viewpoint comprising the immersion environment are proportional to 3/5 and 1/5 of the respective quantization error. Moreover, the displacement error grows severely with the distance between the reference view and the synthesized view. Following these observations, displacement error values can be predicted for given pixel coordinates and a given quantization error, which can serve as a first step towards modeling the relationship between the encoding rate of the reference views and the quality of the synthesized views.
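To make the two quantitative claims in the abstract concrete, the following is a minimal Python sketch of the stated relationships: the sine-scaled linear dependence of displacement error on quantization error, and the majority-case 3/5 and 1/5 horizontal/vertical proportions. The function names, the example numbers, and the choice to model the two observations as separate functions are assumptions for illustration, not the paper's implementation.

```python
import math

# Sketch of the relationships stated in the abstract. How the two
# observations compose in the paper's full model is not specified
# here; each is illustrated independently.

def sine_scaled_error(quant_error: float, theta_rad: float) -> float:
    """Linear model from the abstract: displacement error equals the
    quantization error scaled by sin(theta), where theta is the angle
    formed at the reference camera between the object and the virtual
    viewpoint in the 3D scene."""
    return quant_error * math.sin(theta_rad)

def majority_case_errors(quant_error: float) -> tuple[float, float]:
    """Majority-case proportions reported in the abstract: horizontal
    and vertical displacement errors of roughly 3/5 and 1/5 of the
    quantization error, respectively."""
    return (3 / 5) * quant_error, (1 / 5) * quant_error

# Example usage with hypothetical numbers: a quantization error of
# 4 depth levels and a 30-degree object/view angle.
print(sine_scaled_error(4.0, math.radians(30)))  # 2.0
print(majority_case_errors(4.0))                 # ≈ (2.4, 0.8)
```

Under this reading, the prediction needs only the quantization error and the object/view geometry, which is what makes it usable as a first step toward rate-quality modeling of the reference views.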