{"title":"Experiencing an Invisible World War I Battlefield Through Narrative-Driven Redirected Walking in Virtual Reality","authors":"Run Yu, Zachary Duer, J. T. Ogle, D. Bowman, Thomas W. Tucker, D. Hicks, Dongsoo Choi, Zach Bush, Huy Ngo, Phat Nguyen, Xindi Liu","doi":"10.1109/VR.2018.8448288","DOIUrl":"https://doi.org/10.1109/VR.2018.8448288","url":null,"abstract":"Redirected walking techniques have the potential to provide natural locomotion while users experience large virtual environments. However, when using redirected walking in small physical workspaces, disruptive overt resets are often required. We describe the design of an educational virtual reality experience in which users physically walk through virtual tunnels representative of the World War I battle of Vauquois. Walking in only a 15- by 5-foot tracked space, users are redirected through subtle, narrative-driven resets to walk through a tunnel nearly 50 feet in length. This work contributes approaches and lessons that can be used to provide a seamless and natural virtual reality walking experience in highly constrained physical spaces.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134363942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance-Driven Dance Motion Control of a Virtual Partner Character","authors":"Christos Mousas","doi":"10.1109/VR.2018.8446498","DOIUrl":"https://doi.org/10.1109/VR.2018.8446498","url":null,"abstract":"Taking advantage of motion capture and display technologies, a method giving a user the ability to control the dance motions of a virtual partner in an immersive setup was developed and is presented in this paper. The method utilizes a dance motion dataset containing the motion of both dancers (leader and partner). A hidden Markov model (HMM) was used to learn the structure of the dance motions. The HMM was trained on the motion of a chosen dancer (leader or partner), and during runtime, the system predicts the progress of the chosen dance motion, which corresponds to the progress of the user's motion. The regular structure of the HMM was extended by utilizing a jump state transition, allowing the user to improvise dance motions during the runtime. Since the jump state addition increases the model's complexity, an effort was made to optimize the prediction process to ensure runtime efficiency. A few corrective steps were also implemented to ensure the partner character's motions appear natural. A user study was conducted to understand the naturalness of the synthesized motion as well as the control that the user has on the partner character's synthesized motion.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122980457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HangerOVER: Development of HMO-Embedded Haptic Display Using the Hanger Reflex and VR Application","authors":"Yuki Kon, Takuto Nakamura, R. Sakuragi, Hirotaka Shlonolrl, Vibol Yem, Hiroyuki Kajirnoto","doi":"10.1109/VR.2018.8446524","DOIUrl":"https://doi.org/10.1109/VR.2018.8446524","url":null,"abstract":"The Hanger Reflex is a phenomenon in which the head rotates unintentionally when it is sandwiched by a wire hanger. The reflex is effectively generated by pressing on specific points, and can be reproduced by pressing with an actuator. We propose the HangerOVER, an HMD-embedded haptic display that can provide both force and motion senses using the Hanger Reflex. In this paper, we designed HangerOVER that HMD-embedded force and motion display using the Hanger Reflex, and developed four VR applications for demonstration.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123085903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing VR Sickness Through Peripheral Visual Effects","authors":"Helmut Buhler, Sebastian Misztal, Jonas Schild","doi":"10.1109/VR.2018.8446346","DOIUrl":"https://doi.org/10.1109/VR.2018.8446346","url":null,"abstract":"This paper proposes and evaluates two novel visual effects that can be applied to Virtual Reality (VR) applications to reduce VR sickness with head-mounted displays (HMD). Unlike other techniques that pursue the same goal, our approach allows a user to move continuously through a virtual environment without reducing the perceived field of view (FOV). A within-design study with 18 users compares reported sickness between the two effects and baseline. The results show lower means of sickness in the two novel effects; however, the difference is not statistically significant across all users, replicating large variety in individual reactions found in previous studies. In summary, reducing optical flow in peripheral vision is a promising approach. Future potential lies in adjusting visual effect parameters to maximize impact for large user groups.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121411985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object Size Perception in Immersive Virtual Reality: Avatar Realism Affects the Way We Perceive","authors":"Nami Ogawa, Takuji Narumi, M. Hirose","doi":"10.1109/VR.2018.8446318","DOIUrl":"https://doi.org/10.1109/VR.2018.8446318","url":null,"abstract":"How does the representation of an embodied avatar influence the way in which a human perceives the scale of a virtual environment? It has been shown that the scale of the external environment is perceived relative to the size of one's body. However, the influence of avatar realism on the perceived scale has not been investigated, despite the fact that it is common to embody avatars of various representations, from iconic to realistic. This study examined how avatar realism would affect perceived graspable object sizes as the size of the avatar hand changes. In the experiment, we manipulated the realism (high, medium, and low) and size (veridical and enlarged) of the avatar hand, and measured the perceived size of a cube. The results showed that the size of the cube was perceived to be smaller when the avatar hand was enlarged for all degrees of realism of the hand. However, the enlargement of the avatar hand had a greater influence on the perceived cube size for the highly realistic avatar than for the medium-level and low-level realism conditions. This study shed new light on the importance of the avatar representation in a three-dimensional user interface field, in how it can affect the manner in which we perceive the scale of a virtual environment.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125576284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study of Cybersickness and Sensory Conflict Theory Using a Motion-Coupled Virtual Reality System","authors":"A. K. T. Ng, L. Chan, H. Lau","doi":"10.1109/VR.2018.8446269","DOIUrl":"https://doi.org/10.1109/VR.2018.8446269","url":null,"abstract":"Sensory conflict theory attempts to provide the framework of cyber-sickness in virtual reality (VR) systems by the mismatch between visual and vestibular senses. This study examined whether coupling motion sensations to the visual stimulus in a VR setting could reduce the discomfort. A motion-coupled VR system was used. Motion platform provides motion that supplements visual stimulus from the head-mounted display. Participants experience programmed visual and motion yaw rotations while viewing a virtual apartment. Three conditions were tested on how motion and visual stimuli synchronise which each other: purely visual, motion synchronised with visual, and visually-levelled frame of reference. Results showed that providing matching visual-motion stimuli decreased the miserable score (MISC) of cybersickness and increased the joyfulness score (JOSC) of their subjective feeling.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125835915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating VR Live Videos with Tripod Panoramic Rig","authors":"F. Xu, Tianqi Zhao, Bicheng Luo, Qionghai Dai","doi":"10.1109/VR.2018.8448283","DOIUrl":"https://doi.org/10.1109/VR.2018.8448283","url":null,"abstract":"Recent breakthrough in consumer-level virtual reality (VR) devices brings an increasing demand of VR live content. As converting real life content into VR need complex computations, current techniques can not synthesize 360° 3D VR content with high performance, not to mention real time. We propose an end-to-end system that records a scene using a tripod panoramic rig and broadcasts 360° stereo panorama videos in real time. The system performs a panorama stitching technique which pre-compute 3 stitching seam candidates for dynamic seam switching in the live broadcasting. This technique achieves high frame rates (>30fps) with minimum foreground cutoff and temporal jittering artifacts. Stereo vision quality is also better preserved by a proposed weighting-based image alignment scheme. We demonstrate the effectiveness of our approach on a variety of videos delivering live events. And our system has been successfully used in broadcasting live shows to mobile phone users on a professional live broadcasting platform with about 390 million user visits per month.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"382 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131857475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of a Virtual Reality and Haptic Setup Linking Arousals to Training Scenarios: A Preliminary Stage","authors":"Konstantinos Koumaditis, Francesco Chinello, Sarune Venckute","doi":"10.1109/VR.2018.8446528","DOIUrl":"https://doi.org/10.1109/VR.2018.8446528","url":null,"abstract":"Using Virtual Reality (VR) to realise immersive training environments is not a new concept. However, investigating arousal in immersive environments is. By arousal, we denote a general physical and psychological activity that in the form of anxiety and stress for example, can affect trainees' performance. In this work, we describe the setup design for a two-phase explorative experiment linking arousal and performance, during training in a Virtual Reality (VR) environment. To do so we are using an appraised well-crafted VR puzzle game, questionnaires (i.e. NASA Task Load Index [3]), and sensors (skin conductance response / pulse). The experiment will involve participants from the public that will be trained in two predefined processes of variant difficulty.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131876798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Software Based Visual Aberration Correction for HMDs","authors":"F. Xu, Dayang Li","doi":"10.1109/VR.2018.8447557","DOIUrl":"https://doi.org/10.1109/VR.2018.8447557","url":null,"abstract":"When using current head-mounted displays (HMDs), users with optical aberrations need to wear the equipment on the top of their own glasses. As both the HMDs and the glasses require to be tightly attached to faces, wearing them together is very inconvenient and uncomfortable, and thus degrades user experiences heavily. In this paper, we propose a real-time image pre-correction technique to correct the aberrations purely by software. Users can take off their own glasses and enjoy the virtual reality (VR) experience through an ordinary HMD freely and comfortably. Furthermore, as our technique is not related to hardware, it is compatible with all the current commercial HMDs. Our technique is based on the observation that the refractive errors majorly cause the ideal retinal image to be convolved by certain kernels. So we pre-correct the image on the display according to the specific aberrations of a user, aiming to maximize the similarity between the convolved retinal image and the ideal image. To achieve real-time performance, we modify the energy function to have linear solutions and implement the optimization fully on GPU. The experiments and the user study indicate that without any changes on hardware, we generate better viewing experience of HMDs for users with optical aberrations.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120857633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesizing Personalized Training Programs for Improving Driving Habits via Virtual Reality","authors":"Yining Lang, Liang Wei, Fang Xu, Yibiao Zhao, L. Yu","doi":"10.1109/VR.2018.8448290","DOIUrl":"https://doi.org/10.1109/VR.2018.8448290","url":null,"abstract":"The recent popularity of consumer-grade virtual reality devices, such as Oculus Rift, HTC Vive, and Fove virtual reality headset, has enabled household users to experience highly immersive virtual environments. We take advantage of the commercial availability of these devices to provide a novel virtual reality-based driving training approach designed to help individuals improve their driving habits in common scenarios. Our approach first identifies improper driving habits of a user when he drives in a virtual city. Then it synthesizes a pertinent training program to help improve the users driving skills based on the discovered improper habits of the user. To apply our approach, a user first goes through a pre-evaluation test from which his driving habits are analyzed. The analysis results are used to drive optimization for synthesizing a training program. This training program is a personalized route which includes different traffic events. When the user drives along this route via a driving controller and an eye-tracking virtual reality headset, the traffic events he encounters will help him to improve his driving habits. To validate the effectiveness of our approach, we conducted a user study to compare our virtual reality-based driving training with other training methods. The user study results show that the participants trained by our approach perform better on average than those trained by other methods in terms of evaluation score and response time and their improvement is more persistent.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130902853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}