{"title":"Reducing Seasickness in Onboard Marine VR Use through Visual Compensation of Vessel Motion","authors":"A. Stevens, T. Butkiewicz","doi":"10.1109/VR.2019.8797800","DOIUrl":"https://doi.org/10.1109/VR.2019.8797800","url":null,"abstract":"We developed a virtual reality interface for cleaning sonar point cloud data. Experimentally, users performed better when using this VR interface compared to a mouse-and-keyboard with a desktop monitor. However, hydrographers often clean data aboard moving vessels, which can create motion sickness. Users of VR experience motion sickness as well, in the form of simulator sickness. Combining the two is a worst-case scenario for motion sickness. Advice for avoiding seasickness includes focusing on the horizon or objects in the distance, to keep your frame of reference external. We explored moving the surroundings in a virtual environment to match vessel motion, to assess whether it provides similar visual cues that could prevent seasickness. An informal evaluation in a seasickness-inducing simulator was conducted, and subjective preliminary results hint at such compensation's potential for reducing motion sickness, enabling the use of immersive VR technologies aboard underway ships.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127526142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keynote Speaker: Virtual Reality for Enhancing Human Perceptional Diversity Towards an Inclusive Society","authors":"Yoichi Ochiai","doi":"10.1109/vr.2019.8798046","DOIUrl":"https://doi.org/10.1109/vr.2019.8798046","url":null,"abstract":"We conducted research project towards an inclusive society from the viewpoint of the computational assistive technologies. This project aims to explore AI-assisted human-machine integration techniques for overcoming impairments and disabilities. By connecting assistive hardware and auditory/visual/tactile sensors and actuators with a user-adaptive and interactive learning framework, we propose and develop a proof of concept of our “xDiversity AI platform” to meet the various abilities, needs, and demands in our society. For example, one of our studies is a wheelchair for automatic driving using “AI technology” called “tele wheelchair”. Its purpose is not fully automated driving but labor saving at nursing care sites and nursing care by natural communication. These attempts to solve the challenges facing the body and sense organs with the help of AI and others. In this keynote we explain the case studies and out final goal for the social design and deployment of the assistive technologies towards an inclusive society.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121100833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study in Virtual Reality on (Non-)Gamers‘ Attitudes and Behaviors","authors":"Sebastian Stadler, H. Cornet, F. Frenkler","doi":"10.1109/VR.2019.8797750","DOIUrl":"https://doi.org/10.1109/VR.2019.8797750","url":null,"abstract":"Virtual Reality (VR) constitutes an advantageous alternative for research considering scenarios that are not feasible in real-life conditions. Thus, this technology was used in the presented study for the behavioral observation of participants when being exposed to autonomous vehicles (AVs). Further data was collected via questionnaires before, directly after the experience and one month later to measure the impact that the experience had on participants' general attitude towards AVs. Despite a nonsignificance of the results, first insights suggest that participants with low prior gaming experience were more impacted than gamers. Future work will involve bigger sample size and refined questionnaires.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126852383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimised Molecular Graphics on the HoloLens","authors":"C. Müller, Matthias Braun, T. Ertl","doi":"10.1109/VR.2019.8798111","DOIUrl":"https://doi.org/10.1109/VR.2019.8798111","url":null,"abstract":"The advent of modern and affordable augmented reality head sets like Microsoft HoloLens has sparked new interest in using virtual and augmented reality technology in the analysis of molecular data. For all visualisation in immersive, mixed-reality scenarios, a sufficiently high rendering speed is an important factor, which leads to the issue of limited processing power available on fully untethered devices facing the situation of handling computationally expensive visualisations. Recent research shows that the space-filling model of even small data sets from the Protein Data Bank (PDB) cannot be rendered at desirable frame rates on the HoloLens. In this work, we report on how to improve the rendering speed of atom-based visualisation of proteins and how the rendering of more abstract representations of the molecules compares against it. We complement our findings with in-depth GPU and CPU performance numbers.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"67 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115698116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense 3D Scene Reconstruction from Multiple Spherical Images for 3-DoF+ VR Applications","authors":"T. L. T. D. Silveira, C. Jung","doi":"10.1109/VR.2019.8798281","DOIUrl":"https://doi.org/10.1109/VR.2019.8798281","url":null,"abstract":"We propose a novel method for estimating the 3D geometry of indoor scenes based on multiple spherical images. Our technique produces a dense depth map registered to a reference view so that depth-image-based-rendering (DIBR) techniques can be explored for providing three-degrees-of-freedom plus immersive experiences to virtual reality users. The core of our method is to explore large displacement optical flow algorithms to obtain point correspondences, and use cross-checking and geometric constraints to detect and remove bad matches. We show that selecting a subset of the best dense matches leads to better pose estimates than traditional approaches based on sparse feature matching, and explore a weighting scheme to obtain the depth maps. Finally, we adapt a fast image-guided filter to the spherical domain for enforcing local spatial consistency, improving the 3D estimates. Experimental results indicate that our method quantitatively outperforms competitive approaches on computer-generated images and synthetic data under noisy correspondences and camera poses. Also, we show that the estimated depth maps obtained from only a few real spherical captures of the scene are capable of producing coherent synthesized binocular stereoscopic views by using traditional DIBR methods.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128232785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Reality Instruction Followed by Enactment Can Increase Procedural Knowledge in a Science Lesson","authors":"N. K. Andreasen, Sarune Baceviciute, Prajakt Pande, G. Makransky","doi":"10.1109/VR.2019.8797755","DOIUrl":"https://doi.org/10.1109/VR.2019.8797755","url":null,"abstract":"A 2×2 between-subjects experiment (a) investigated and compared the instructional effectiveness of immersive virtual reality (VR) versus video as media for teaching scientific procedural knowledge, and (b) examined the efficacy of enactment as a generative learning strategy in combination with the respective instructional media. A total of 117 high school students (74 females) were randomly distributed across four instructional groups — VR and enactment, video and enactment, only VR, and only video. Outcome measures included declarative knowledge, procedural knowledge, knowledge transfer, and subjective ratings of perceived enjoyment. Results indicated that there were no main effects or interactions for the outcomes of declarative knowledge or transfer. However, there was a significant interaction between media and method for the outcome of procedural knowledge with the VR and enactment group having the highest performance. Furthermore, media also seemed to have a significant effect on student perceived enjoyment, indicating that the groups enjoyed the VR simulation significantly more than the video. The results deepen our understanding of how we learn with immersive technology, as well as suggest important implications for implementing VR in schools.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116115544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic Interface Based on Optical Fiber Force Myography Sensor","authors":"E. Fujiwara, Yu Tzu Wu, M. K. Gomes, W. H. A. Silva, C. Suzuki","doi":"10.1109/VR.2019.8797788","DOIUrl":"https://doi.org/10.1109/VR.2019.8797788","url":null,"abstract":"A haptic grasp interface based on the force myography technique is reported. The hand movements and forces during the object manipulation are assessed by an optical fiber sensor attached to the forearm, so the virtual contact is computed, and the reaction forces are delivered to the subject by graphical and vibrotactile feedbacks. The system was successfully tested for different objects, providing a non-invasive and realistic approach for applications in virtual-reality environments.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"354 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131750575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Repurposing Labeled Photographs for Facial Tracking with Alternative Camera Intrinsics","authors":"Caio Brito, Kenny Mitchell","doi":"10.1109/VR.2019.8798303","DOIUrl":"https://doi.org/10.1109/VR.2019.8798303","url":null,"abstract":"Acquiring manually labeled training data for a specific application is expensive and while such data is often fully available for casual camera imagery, it is not a good fit for novel cameras. To overcome this, we present a repurposing approach that relies on spherical image warping to retarget an existing dataset of landmark labeled casual photography of people's faces with arbitrary poses from regular camera lenses to target cameras with significantly different intrinsics, such as those often attached to the head mounted displays (HMDs) with wide-angle lenses necessary to observe mouth and other features at close proximity and infrared only sensing for eye observations. Our method can predict landmarks of the HMD wearer in facial sub-regions in a divide-and-conquer fashion with particular focus on mouth and eyes. We demonstrate animated avatars in realtime using the face landmarks as input without user-specific nor application-specific dataset.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132818080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation on a Wheelchair Simulator Using Limited-Motion Patterns and Vection-Inducing Movies","authors":"Akihiro Miyata, Hironobu Uno, Kenro Go","doi":"10.1109/VR.2019.8797726","DOIUrl":"https://doi.org/10.1109/VR.2019.8797726","url":null,"abstract":"Existing virtual reality (VR) based wheelchair simulators have difficulty providing both visual and motion feedback at low cost. To address this issue, we propose a VR-based wheelchair simulator using a combination of motions attainable by an electric-powered wheelchair and vection-inducing movies displayed on a head-mounted display. This approach enables the user to have a richer simulation experience, because the scenes of the movie change as if the wheelchair performs motions that are not actually performable. We developed a proof of concept using only consumer products and conducted evaluation tasks, confirming that our approach can provide a richer experience for barrier simulations.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131119630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grasping objects in immersive Virtual Reality","authors":"Manuela Chessa, Guido Maiello, Lina K. Klein, Vivian C. Paulun, F. Solari","doi":"10.1109/VR.2019.8798155","DOIUrl":"https://doi.org/10.1109/VR.2019.8798155","url":null,"abstract":"Grasping is one of the fundamental actions we perform to interact with objects in real environments, and in the real world we rarely experience difficulty picking up objects. Grasping plays a fundamental role for interactive virtual reality (VR) systems that are increasingly employed not only for recreational purposes, but also for training in industrial contexts, in medical tasks, and for rehabilitation protocols. To ensure the effectiveness of such VR applications, we must understand whether the same grasping behaviors and strategies employed in the real world are adopted when interacting with objects in VR. To this aim, we replicated in VR an experimental paradigm employed to investigate grasping behavior in the real world. We tracked participants' forefinger and thumb as they picked up, in a VR environment, unfamiliar objects presented at different orientations, and exhibiting the same physics behavior of their real counterparts. We compared grasping behavior within and across participants, in VR and in the corresponding real world situation. Our findings highlight the similarities and differences in grasping behavior in real and virtual environments.","PeriodicalId":315935,"journal":{"name":"2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133134927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}