Title: Applying Multi-User Virtual Reality to Collaborative Medical Training
Authors: Jonas Schild, Sebastian Misztal, Benjamin Roth, Leonard Flock, Thomas Luiz, Dieter Lerner, M. Herkersdorf, Konstantin Wegner, Markus Neuberger, Andreas Franke, C. Kemp, Johannes Pranghofer, Sven Seele, Helmut Buhler, R. Herpers
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446160
Abstract: We present a multi-user virtual reality (VR) setup that aims to provide novel training tools for paramedics that enhance current learning methods. The hardware setup consists of a two-user full-scale VR environment with head-mounted displays for two interactive trainees and an additional desktop PC for one trainer participant. The software provides a connected multi-user environment, showcasing a paramedic emergency simulation focused on anaphylactic shock, a representative scenario for critical medical cases that occur too rarely to be reliably encountered within a regular curricular term of vocational training. The prototype offers hands-on experience with multi-user VR in an applied scenario, generating discussion around the current state and future development of four important research areas: (a) user navigation, (b) interaction, (c) level of visual abstraction, and (d) level of task abstraction.
{"title":"Evaluating the Effectiveness of Head-Mounted Display Virtual Reality (HMD VR) Environment on Students' Learning for a Virtual Collaborative Engineering Assembly Task","authors":"Wen-Hao Huang","doi":"10.1109/VR.2018.8446508","DOIUrl":"https://doi.org/10.1109/VR.2018.8446508","url":null,"abstract":"The emerging VR social networks (e.g., Facebook Spaces, Rec Room) provide opportunities for engineering faculties to design collaborative virtual engineering tasks in their classroom instruction with HMD VR system. However, we do not how this capacity will affect students' learning and their professional skills (e.g., communication and collaboration). The proposed study is expected to fill this research gap and will use a mixed-methods design to explore students' performance and learning outcomes in a virtual collaborative automotive assembly task. The quantitative data will be collected from the pre-and-post task survey and the task itself. This data will be used to analyze the differences among experiment and control groups. Students' responses to the open questions in the post-task survey will serve as triangulation and provide deep insight for the quantitative results. The study is expected to not only contribute to the research field but also benefit different stakeholders in the engineering education systems.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122469557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Interacting with Distant Objects in Augmented Reality
Authors: Matt Whitlock, Ethan Hanner, Jed R. Brubaker, Shaun K. Kane, D. Szafir
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446381
Abstract: Augmented reality (AR) applications can leverage the full space of an environment to create immersive experiences. However, most empirical studies of interaction in AR focus on interactions with objects close to the user, generally within arm's reach. As objects move farther away, the efficacy and usability of different interaction modalities may change. This work explores AR interactions at a distance, measuring how applications may support fluid, efficient, and intuitive interactive experiences in room-scale augmented reality. We conducted an empirical study (N = 20) to measure trade-offs between three interaction modalities (multimodal voice, embodied freehand gesture, and handheld devices) for selecting, rotating, and translating objects at distances ranging from 8 to 16 feet (2.4 m to 4.9 m). Though participants performed comparably with embodied freehand gestures and handheld remotes, they perceived embodied gestures as significantly more efficient and usable than device-mediated interactions. Our findings offer considerations for designing efficient and intuitive interactions in room-scale AR applications.
Title: Fluid VR: Extended Object Associations for Automatic Mode Switching in Virtual Reality
Authors: Mayra Donaji Barrera Machuca, Junwei Sun, Duc-Minh Pham, W. Stuerzlinger
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446437
Abstract: Constrained interaction and navigation methods for virtual reality reduce the complexity of the interaction. Yet, with previously presented solutions, users need to learn new interaction tools or remember different actions to change between interaction methods. In this paper, we propose Fluid VR, a new 3D user interface for interactive virtual environments that lets users seamlessly transition between navigation and selection. Based on the selected object's properties, Fluid VR applies specific constraints to the interaction or navigation associated with that object. This way users have better control of their actions, without having to change tools or activate different modes of interaction.
Title: Inducing Compensatory Changes in Gait Similar to External Perturbations Using an Immersive Head Mounted Display
Authors: Lara I. Riem, Jacob Van Dehy, Tanya Onushko, S. Beardsley
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446432
Abstract: Understanding the sensorimotor control mechanisms that mediate gait compensation during environmental perturbation is a crucial step in developing tailored rehabilitative therapies to restore ambulation in patient populations. Current methods to evaluate the effects of environmental perturbations involve costly systems that physically perturb patients to elicit a compensatory response. Studies have shown that visual feedback alone can elicit dramatic changes in gait; however, the impact of fully immersive visual feedback is not well studied. Here we examined whether a low-cost immersive virtual reality (VR) system can elicit perturbation responses similar to a physical disruption. We examined the responses of 11 subjects as they walked through a VR environment consisting of a bridge spanning a lake. While subjects walked on a treadmill mounted to a 6 degree-of-freedom motion base, pseudorandom roll perturbations (3, 6, and 11 deg) were applied visually to the bridge with (VP trials) and without (V trials) the corresponding physical displacement of the motion base. Significant differences were found between normal (unperturbed) walking and normal walking in the VR environment (p < 0.05) for average step length, width, and Margin of Stability (MoS). Significant differences were also observed between unperturbed and perturbed walking in the VR environment (p < 0.05 for VP and V trials). While the subjects' responses to visual perturbations were generally lower than to combined visual and physical perturbations, the differences were not statistically significant (p > 0.05). The results demonstrate that visual perturbations provided in an immersive virtual environment can induce compensatory changes in gait during treadmill walking that are consistent with a physical perturbation. The application of environmental perturbations in VR systems could provide a cost-effective approach for gait rehabilitation in patient populations.
{"title":"Movement Visualizer for Networked Virtual Reality Platforms","authors":"Omar Shaikh, Yilu Sun, A. S. Won","doi":"10.1109/VR.2018.8446398","DOIUrl":"https://doi.org/10.1109/VR.2018.8446398","url":null,"abstract":"We describe the design, deployment and testing of a module to track and graphically represent user movement in a collaborative virtual environment. This module allows for the comparison of ground-truth user/observer ratings of the affective qualities of an interaction with automatically generated representations of the participants' movements in real time. In this example, we generate three charts visible both to participants and external researchers. Two display the sum of the tracked movements of each participant, and a third displays a “synchrony visualizer”, or a correlation coefficient based on the relationship between the two participants' movements. Users and observers thus see a visual representation of “nonverbal synchrony” as it evolves over the course of the interaction. We discuss this module in the context of other applications beyond synchrony.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Spatially Perturbed Collision Sounds Attenuate Perceived Causality in 3D Launching Events
Authors: Duotun Wang, James R. Kubricht, Yixin Zhu, Wei Liang, Song-Chun Zhu, Chenfanfu Jiang, Hongjing Lu
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8448287
Abstract: When a moving object collides with an object at rest, people immediately perceive a causal event: i.e., the first object has launched the second object forwards. However, when the second object's motion is delayed, or is accompanied by a collision sound, causal impressions attenuate or strengthen, respectively. Despite a rich literature on causal perception, researchers have exclusively utilized 2D visual displays to examine the launching effect. It remains unclear whether people are equally sensitive to the spatiotemporal properties of observed collisions in the real world. The present study first examined whether previous findings in causal perception with audiovisual inputs can be extended to immersive 3D virtual environments. We then investigated whether perceived causality is influenced by variations in the spatial position of an auditory collision indicator. We found that people are able to localize sound positions based on auditory inputs in VR environments, and that spatial discrepancy between the estimated position of the collision sound and the visually observed impact location attenuates perceived causality.
Title: Human Compensation Strategies for Orientation Drifts
Authors: Tobias Feigl, Christopher Mutschler, M. Philippsen
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446300
Abstract: No-Pose (NP) tracking systems rely on a single sensor located at the user's head to determine the position of the head. They estimate the head orientation with inertial sensors and analyze the body motion to compensate their drift. However, with orientation drift, VR users implicitly lean their heads and bodies sideways. Hence, to determine the sensor drift and to explicitly adjust the orientation of the VR display, there is a need to understand and consider both the user's head and body orientations. This paper studies the effects of head orientation drift around the yaw axis on the user's absolute head and body orientations when walking naturally in VR. We study how much drift accumulates over time, how a user experiences and tolerates it, and how a user applies strategies to compensate for larger drifts.
Title: Touchless Haptic Feedback for Supernatural VR Experiences
Authors: Jonatan Martínez, Daniel Griffiths, Valerio Biscione, Orestis Georgiou, Thomas Carter
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446522
Abstract: Haptics is an important part of the VR space, as evidenced by the plethora of haptic controllers available today. Using a novel ultrasonic haptic device, we developed and integrated mid-air haptic sensations into a VR game experience without the need to wear or hold any equipment. The compelling experience combines visual, audio and haptic stimulation in a supernatural narrative in which the user takes on the role of a wizard apprentice. By using different haptified patterns, we could generate a wide range of sensations that mimic supernatural interactions (wizard spells). We detail our methodology and briefly discuss our findings and future work.
Title: Towards Revisiting Passability Judgments in Real and Immersive Virtual Environments
Authors: Ayush Bhargava, Kathryn M. Lucaites, Leah S. Hartman, Hannah M. Solini, Jeffrey W. Bertrand, Andrew C. Robb, C. Pagano, Sabarish V. Babu
Published in: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 18 March 2018
DOI: https://doi.org/10.1109/VR.2018.8446189
Abstract: Every task we perform in our day-to-day lives requires us to make judgments about size, distance, depth, etc. The same is true for tasks in an immersive virtual environment (IVE). Increasingly, Virtual Reality (VR) applications are being developed for training and entertainment, many of which require the user to determine whether they can pass through an opening. Typically, people determine their ability to pass through an aperture by comparing the width of their shoulders to the width of the opening. Thus, judgments of size and distance in an IVE are necessary for accurate judgments of passability. In this experiment, we empirically evaluate how passability judgments in an IVE, viewed through a Head-Mounted Display (HMD), compare to judgments made in the real world. An exact-to-scale virtual replica of the room and apparatus was used for the VR condition. Results indicate that the accuracy of passability judgments seems to be comparable to that in the real world.