Perception-Driven Hybrid Foveated Depth of Field Rendering for Head-Mounted Displays
Jingyu Liu, Claire Mantel, Søren Forchhammer
2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), October 2021
DOI: 10.1109/ismar52148.2021.00014
Citations: 3
Abstract
In this paper, we present a novel perception-driven hybrid rendering method that leverages the limitations of the human visual system (HVS). Features accounted for in our model include foveation from visual acuity eccentricity (VAE), depth of field (DOF) from vergence and accommodation, and longitudinal chromatic aberration (LCA) from color vision. To allocate the computational workload efficiently, we first apply a gaze-contingent geometry simplification. We then convert coordinates from screen space to polar space with a scaling strategy coherent with VAE. On top of that, we apply stochastic sampling based on DOF. Finally, we post-process the bokeh for DOF, which simultaneously achieves LCA and anti-aliasing. A virtual reality (VR) experiment on 6 Unity scenes with an HTC VIVE Pro Eye head-mounted display (HMD) yields frame rates ranging from 25.2 to 48.7 fps. Objective evaluation with FovVideoVDP, a perception-based visible-difference metric, suggests that the proposed method gives satisfactory just-objectionable-difference (JOD) scores across the 6 scenes, from 7.61 to 8.69 on a 10-unit scale. Our method achieves better performance than existing methods while attaining the same or better quality scores.
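Two of the ingredients the abstract names have standard textbook forms that can be sketched independently of the paper's actual shaders: an acuity-eccentricity falloff used to compress the radial axis of the screen-to-polar mapping, and the thin-lens circle of confusion that drives DOF sampling and bokeh size. The sketch below is illustrative only, assuming a simple linear VAE model with a hypothetical half-resolution eccentricity `e0` and a small-angle pixels-to-degrees conversion; it is not the implementation described in the paper.

```python
import math

def acuity_falloff(ecc_deg, e0=2.3):
    """Hypothetical VAE falloff: relative acuity ~ e0 / (e0 + eccentricity),
    a common linear model; equals 1.0 at the fovea and decays with eccentricity."""
    return e0 / (e0 + ecc_deg)

def to_scaled_polar(x, y, gaze_x, gaze_y, deg_per_px):
    """Map a screen-space pixel to polar coordinates about the gaze point,
    compressing the radial axis by the acuity falloff (fewer samples in the
    periphery). Uses a small-angle approximation for eccentricity."""
    dx, dy = x - gaze_x, y - gaze_y
    r_px = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    ecc_deg = r_px * deg_per_px
    return r_px * acuity_falloff(ecc_deg), theta

def circle_of_confusion(depth, focus, aperture, focal_len):
    """Thin-lens circle-of-confusion diameter (same units as aperture) for an
    object at `depth` when the lens is focused at `focus`; zero in the focal plane."""
    return abs(aperture * focal_len * (depth - focus) /
               (depth * (focus - focal_len)))
```

In a DOF pass along these lines, the circle-of-confusion diameter would set the stochastic sampling radius per pixel, and the radially compressed polar grid would concentrate shading work near the gaze point.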