Hybrid Mono-Stereo Rendering in Virtual Reality

Laura Fink, Nora Hensel, Daniela Markov-Vetter, C. Weber, O. Staadt, Marc Stamminger
DOI: 10.1109/VR.2019.8798283
Published in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019-03-23
Citations: 6

Abstract

Rendering for head-mounted displays (HMDs) doubles the computational effort, since serving human stereopsis requires creating one image for the left eye and one for the right. The difference between this image pair, called binocular disparity, is an important cue for depth perception and for judging the spatial arrangement of surrounding objects. Findings on the human visual system (HVS) have shown that binocular disparities are especially significant in the near range of an observer. But as the disparity converges to a simple geometric shift with rising distance, its importance as a depth cue also declines exponentially. In this paper, we exploit this knowledge about human perception by rendering objects fully stereoscopically only up to a chosen distance and monoscopically from there on. By doing so, we obtain three distinct images, which are synthesized into a new hybrid stereoscopic image pair that reasonably approximates a conventionally rendered stereoscopic pair. The method has the potential to reduce the number of rendered primitives to nearly 50 % and thus significantly lower frame times. In addition to a detailed analysis of the introduced formal error and of how to deal with occurring artifacts, we evaluated the perceived quality of the VR experience in a comprehensive user study with nearly 50 participants. The results show that the perceived difference in quality between the shown image pairs was generally small. An in-depth analysis is given of how the participants reached their decisions and how they subjectively rated their VR experience.
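The claim that disparity loses significance with distance follows from simple vergence geometry: the angular disparity of a point shrinks roughly in proportion to the inverse of its distance. The sketch below illustrates this falloff; the interpupillary distance of 64 mm and the specific distances are illustrative assumptions, not values from the paper.

```python
import math

def binocular_disparity_deg(distance_m, ipd_m=0.064):
    """Angular vergence disparity (in degrees) of a point at the given
    distance, assuming a typical interpupillary distance (IPD).
    Illustrative geometry only, not the paper's model."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

# Disparity drops steeply with distance: large at arm's length,
# a fraction of a degree at room scale.
for d in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0):
    print(f"{d:5.1f} m -> {binocular_disparity_deg(d):6.3f} deg")
```

At half a metre the disparity is above 7 degrees, while at 10 m it has already fallen below 0.4 degrees, which is why a distance threshold for switching from stereoscopic to monoscopic rendering is plausible.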
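The cost model behind the "nearly 50 %" figure can be sketched as a simple draw-submission count: objects nearer than the chosen threshold are rendered once per eye, farther objects only once for the shared mono image. The function names and the (distance, primitive count) scene representation below are assumptions for illustration, not the paper's implementation.

```python
def hybrid_draw_count(objects, threshold_m):
    """Primitive submissions per frame under the hybrid scheme:
    near objects are drawn for both eyes, far objects once (mono).
    `objects` is a list of (distance_m, primitive_count) pairs."""
    stereo = sum(p for d, p in objects if d <= threshold_m)
    mono = sum(p for d, p in objects if d > threshold_m)
    return 2 * stereo + mono

def full_stereo_draw_count(objects):
    """Baseline: every object is drawn once per eye."""
    return 2 * sum(p for _, p in objects)

# Hypothetical scene where 10 % of primitives lie inside the threshold:
scene = [(1.0, 1000), (8.0, 9000)]
print(hybrid_draw_count(scene, 6.0) / full_stereo_draw_count(scene))
```

When most primitives lie beyond the threshold, the ratio approaches 0.5, matching the abstract's upper bound on the achievable savings.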