Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology: Latest Publications

Concept for using eye tracking in a head-mounted display to adapt rendering to the user's current visual field
Daniel Pohl, Xucong Zhang, A. Bulling, O. Grau
DOI: 10.1145/2993369.2996300 (published 2016-11-02)
Abstract: With increasing spatial and temporal resolution in head-mounted displays (HMDs), using eye trackers to adapt rendering to the user is becoming important for handling the rendering workload. Besides methods like foveated rendering, we propose rendering based on the user's current visual field, which depends on the eye gaze. We use two effects for performance optimization. First, we noticed a lens defect in HMDs: depending on the distance of the eye gaze from the center, certain parts of the screen towards the edges are no longer visible. Second, if the user looks up, the lower parts of the screen are no longer visible. For the invisible areas, we propose skipping rendering and reusing the pixel colors from the previous frame. We provide a calibration routine to measure these two effects. Applying the current visual field to a renderer yields up to 2x speed-ups.
Citations: 7
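The skip-and-reuse idea above can be sketched as a per-frame visibility mask: pixels outside a gaze-dependent visible region keep their previous-frame colors. The radius model and all constants below are illustrative assumptions, not the paper's calibrated measurements.

```python
import numpy as np

def visible_mask(gaze, shape, base_radius=0.9, falloff=0.5):
    """Hypothetical visibility model: pixels farther from the screen
    center than a gaze-dependent radius are assumed invisible. The
    radius shrinks as the gaze moves off-center (the lens effect)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    # normalized screen coordinates in [-1, 1]
    nx = (xs / (w - 1)) * 2 - 1
    ny = (ys / (h - 1)) * 2 - 1
    gx, gy = gaze  # gaze offset from center, in [-1, 1]
    radius = base_radius - falloff * np.hypot(gx, gy)
    return np.hypot(nx, ny) <= radius

def render_frame(render_fn, prev_frame, gaze):
    """Render only visible pixels; reuse previous-frame colors elsewhere."""
    mask = visible_mask(gaze, prev_frame.shape[:2])
    frame = prev_frame.copy()
    frame[mask] = render_fn()[mask]
    return frame
```

In a real renderer the mask would gate shading work per tile or pixel rather than discarding an already-rendered image, but the data flow (mask, skip, reuse) is the same.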
Temporal antialiasing for head mounted displays in virtual reality
Jung-Bum Kim, Soo-Ryum Choi, Joon-Hyun Choi, Sang-Jun Ahn, Chanmin Park
DOI: 10.1145/2993369.2996298 (published 2016-11-02)
Abstract: This paper identifies a new temporal aliasing problem caused by unintended head movement of users wearing VR HMDs. The images users see change slightly even when they intend to hold still and concentrate on a certain part of the VR content. The slight change is more perceivable because the images are magnified by the lenses of VR HMDs. We propose a head-movement-based temporal antialiasing approach that blends the colors users see over the course of head movement. The locations and weights of the colors to be blended are determined from head movement and time stamps. The speed of head movement also determines the proportions of past and present colors in the blend. Our approach effectively reduces the temporal aliasing caused by unintended head movement.
Citations: 0
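The speed-dependent blending can be sketched as a history buffer whose weight depends on head speed: slow (likely unintended) movement keeps more history, fast deliberate movement favors the current frame. The exponential mapping and its constants are illustrative choices, not the paper's formula.

```python
import numpy as np

def blend_weight(head_speed, sensitivity=0.1, w_min=0.2):
    """Map head speed (e.g. deg/s) to the weight of the current frame.
    Low speed -> heavy history blending (strong smoothing); high
    speed -> mostly the current frame. w_min keeps the image from
    freezing when the head is perfectly still."""
    return max(w_min, 1.0 - float(np.exp(-sensitivity * head_speed)))

def temporal_antialias(history, current, head_speed):
    """Blend the accumulated history color with the current frame."""
    w = blend_weight(head_speed)
    return (1.0 - w) * history + w * current
```

A production implementation would also reproject the history buffer using the head-pose delta before blending, so that past and present samples correspond to the same content.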
Multi-view gesture annotations in image-based 3D reconstructed scenes
B. Nuernberger, Kuo-Chin Lien, Lennon Grinta, Chris Sweeney, M. Turk, Tobias Höllerer
DOI: 10.1145/2993369.2993371 (published 2016-11-02)
Abstract: We present a novel 2D gesture annotation method for use in image-based 3D reconstructed scenes, with applications in collaborative virtual and augmented reality. Image-based reconstructions allow users to virtually explore a remote environment using image-based rendering techniques. To collaborate with other users, either synchronously or asynchronously, simple 2D gesture annotations can be used to convey spatial information to another user. Unfortunately, prior methods either cannot disambiguate such 2D annotations in 3D from novel viewpoints or require relatively dense reconstructions of the environment. In this paper, we propose a simple multi-view annotation method that is useful in a variety of scenarios and applicable to both very sparse and dense 3D reconstructions. Specifically, we employ interactive disambiguation of the 2D gestures via a second annotation drawn from another viewpoint, triangulating the two drawings to achieve a 3D result. Our method automatically chooses an appropriate second viewpoint and uses image-based rendering transitions to keep the user oriented while moving to the second viewpoint. User experiments in an asynchronous collaboration scenario demonstrate the usability of the method and its superiority over a baseline method. In addition, we showcase our method running on a variety of image-based reconstruction datasets and highlight its use in a synchronous local-remote user collaboration system.
Citations: 20
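Triangulating two 2D drawings from different viewpoints reduces, per annotation point, to intersecting two viewing rays. A standard closest-point formulation (the midpoint of the shortest segment between the two rays, since noisy rays rarely intersect exactly) is sketched below; function and variable names are illustrative, not the paper's API.

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Closest-point triangulation of two viewing rays. Each 2D
    annotation point defines a ray (origin o, direction d) from its
    camera; the 3D point is the midpoint between the rays' closest
    points."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    c = d1 @ d2            # cosine of the angle between the rays
    denom = 1.0 - c**2
    if denom < 1e-9:
        # near-parallel rays give an ill-conditioned intersection,
        # which is why a well-separated second viewpoint matters
        raise ValueError("rays are parallel; pick another viewpoint")
    t1 = (b @ d1 - (b @ d2) * c) / denom
    t2 = ((b @ d1) * c - b @ d2) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2)
```

The degenerate-case check also explains the paper's emphasis on automatically choosing a good second viewpoint: the farther apart the two viewing directions, the better conditioned the triangulation.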
A VR serious game for fire evacuation drill with synchronized tele-collaboration among users
Gyutae Ha, Hojun Lee, Sangho Lee, J. Cha, Shiho Kim
DOI: 10.1145/2993369.2996306 (published 2016-11-02)
Abstract: Immersive VR, a technology that has been getting attention in recent years, is widely applied to serious games because it can provide users with fun and intriguing experiences. This poster proposes a self-training VR serious game for fire evacuation drills with concurrent tele-collaboration among avatars controlled by and synchronized with multiple users in remote places. We introduce a system architecture for a single user and its extension to a multi-user system. The single-user system consists of wearable sensors and a 3D VR HMD to synchronize a user's motions to their own avatar in the virtual environment. The system can easily be extended to multi-user mode through the Unity game cloud server. The multi-user mode enables players to experience tele-existence, so that they can collaborate in the virtual environment and concurrently navigate while interacting with virtual objects as if they coexisted in the same space.
Citations: 15
Take-over control paradigms in collaborative virtual environments for training
Gwendal Le Moulec, F. Argelaguet, A. Lécuyer, V. Gouranton
DOI: 10.1145/2993369.2993410 (published 2016-11-02)
Abstract: The main objective of this paper is to study and formalize Take-Over Control in Collaborative Virtual Environments for Training (CVET). Take-Over Control represents the transfer (the take-over) of interaction control of an object between two or more users. This paradigm is particularly useful in training scenarios, in which interaction control can be continuously exchanged between the trainee and the trainer, e.g. the latter guiding and correcting the trainee's actions. The paper presents a formalization of Take-Over Control, followed by an illustration focusing on a use case of collaborative maritime navigation. In the presented use case, the trainee has to avoid an underwater obstacle with the help of a trainer who has additional information about the obstacle. The use case highlights the different elements a Take-Over Control situation should enforce, such as user awareness. Different Take-Over Control techniques were provided and evaluated, focusing on the transfer exchange mechanism and the visual feedback. The results show that participants preferred the Take-Over Control technique that maximized user awareness.
Citations: 6
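At its core, the take-over paradigm is a hand-off of exclusive control plus an awareness signal to all participants. A minimal sketch, with illustrative names that are not the paper's formalization:

```python
class TakeOverControl:
    """One user at a time holds interaction control of an object;
    every hand-off fires a notification hook so all participants
    stay aware of who is in control (the property the evaluated
    techniques varied through visual feedback)."""

    def __init__(self, controlled_object, initial_controller, on_transfer=None):
        self.controlled_object = controlled_object
        self.controller = initial_controller
        self.on_transfer = on_transfer  # awareness feedback hook

    def take_over(self, requester):
        """Transfer control to the requester; return the previous holder."""
        previous, self.controller = self.controller, requester
        if self.on_transfer is not None:
            self.on_transfer(self.controlled_object, previous, requester)
        return previous
```

For example, a trainer correcting a trainee's steering would call `take_over("trainer")`, and the hook would drive the visual feedback both users see.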
Shop 'til you hear it drop: influence of interactive auditory feedback in a virtual reality supermarket
Erik Sikström, E. R. Høeg, L. Mangano, N. C. Nilsson, Amalia de Götzen, S. Serafin
DOI: 10.1145/2993369.2996343 (published 2016-11-02)
Abstract: In this paper we describe an experiment investigating the impact of auditory feedback in a virtual reality supermarket scenario. Participants were asked to read a shopping list, collect items one by one, and place them into a shopping cart. Three conditions were presented in random order, in which audio feedback (1) was absent, (2) included impact sounds for collisions, including grasping, or (3) included impact sounds as well as continuous sounds when moving the products. The participants' experience of the simulation in the three experimental conditions was studied using a questionnaire collecting ratings on presence, body ownership, awareness of one's own movements, usability, and enjoyment. The results are presented and discussed.
Citations: 10
Accelerated viewpoint panning with rotational gain in 360 degree videos
Seokjun Hong, G. Kim
DOI: 10.1145/2993369.2996309 (published 2016-11-02)
Abstract: In this paper, we present an application of rotational gain to horizontal panning for viewing 360-degree videos. Rotational gain refers to the ratio of the rotation velocity in the virtual (video) space to that in the physical space; it allows the user to rotate their head less than actually needed, without noticing the adjustment up to a certain degree. As such, it can bring convenience and reduce physical movement, fatigue, and possibly even sickness. We implemented a 360-degree video panning system with both a constant and a dynamic gain, and compared user behavior and subjective usability. Our pilot study showed promising results: with the proper gain value and control scheme, the user will unknowingly use less physical movement than needed, yet maintain reasonable spatial understanding with higher usability.
Citations: 7
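The constant and dynamic gain variants described above can be sketched as a per-frame update of the virtual yaw: the physical head rotation delta is scaled by either a fixed gain or one that grows with head speed. All constants are illustrative assumptions, not the values evaluated in the pilot study.

```python
def dynamic_gain(head_speed, base=1.0, k=0.02, g_max=1.5):
    """Illustrative dynamic gain: unity while the head is still (so a
    resting view does not drift), growing with head speed (deg/s)
    toward g_max."""
    return min(g_max, base + k * abs(head_speed))

def pan_step(virtual_yaw, physical_delta_yaw, head_speed, constant_gain=None):
    """Advance the virtual viewpoint by the physical head rotation
    scaled by either a constant or a speed-dependent gain."""
    gain = constant_gain if constant_gain is not None else dynamic_gain(head_speed)
    return virtual_yaw + gain * physical_delta_yaw
```

With a gain of 1.3, for instance, a 10-degree head turn pans the video by 13 degrees, so the user covers the full 360 degrees with roughly 23% less physical rotation.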
Exploring floating stereoscopic driver-car interfaces with wide field-of-view in a mixed reality simulation
Patrick Lindemann, G. Rigoll
DOI: 10.1145/2993369.2996299 (published 2016-11-02)
Abstract: In this paper, we propose a floating, multi-layered, wide field-of-view user interface for car drivers. It utilizes stereoscopic depth and focus blurring to highlight items with high priority or urgency. Individual layers are additionally used to separate groups of UI elements according to importance or context. Our work is motivated by two main prospects: a fundamentally changing driver-car interaction and ongoing technology advancements in mixed reality devices. A working prototype has been implemented as part of a custom driving simulation and will be further extended. We plan evaluations in contexts ranging from manual to fully automated driving, providing context-specific suggestions. We want to determine user preferences for layout and prioritization of the UI elements, the perceived quality of the interface, and effects on driving performance.
Citations: 4
A compact, wide-FOV optical design for head-mounted displays
I. Rakkolainen, M. Turk, Tobias Höllerer
DOI: 10.1145/2993369.2996322 (published 2016-11-02)
Abstract: We present a new optical design for head-mounted displays (HMDs) with an exceptionally wide field of view (FOV); it can cover even the full human FOV. It is based on seamless lenses and screens curved around the eyes. The proof-of-concept prototypes are promising, and one of them far exceeds the human FOV, although the effective FOV is limited by the anatomy of the human head. The presented optical design has advantages such as compactness, light weight, low cost, and super-wide FOV with high resolution. Even though this is still work in progress and display functionality is not yet implemented, it suggests a feasible way to significantly expand the FOV of HMDs.
Citations: 12
Head turn scaling below the threshold of perception in immersive virtual environments
Martin Westhoven, D. Paul, T. Alexander
DOI: 10.1145/2993369.2993385 (published 2016-11-02)
Abstract: Immersive virtual environments allow users to experience presence, the feeling of being present in a virtual environment. When accessing virtual reality with VR goggles, head tracking is used to update the virtual viewpoint according to the user's head movement. While typically used unmodified, the extent to which the virtual viewpoint follows real head motion can be scaled. In this paper, the effect of scaling below the threshold of perception on presence during a target acquisition task was studied. It was assumed that presence is reduced when head motion is scaled. No effect on presence, simulator sickness, or performance was found, but a significant effect on physical task load was found. The results provide information for further work and for the required verification of the concept of presence used. It can be assumed that load can be modified by the scaling without significantly influencing the quality of presence.
Citations: 3