{"title":"Can physical motions prevent disorientation in naturalistic VR?","authors":"Salvar Sigurdarson, Andrew P. Milne, Daniel Feuereissen, B. Riecke","doi":"10.1109/VR.2012.6180874","DOIUrl":"https://doi.org/10.1109/VR.2012.6180874","url":null,"abstract":"Most virtual reality simulators have a serious flaw: Users tend to get easily lost and disoriented as they navigate. According to the prevailing opinion, this is because of the lack of actual physical motion to match the visually simulated motion: E.g., using HMD-based VR, Klatzky et al. [1] showed that participants failed to update visually simulated rotations unless they were accompanied by physical rotation of the observer, even if passive. If we use more naturalistic environments (but no salient landmarks) instead of just optic flow, would physical motion cues still be needed to prevent disorientation? To address this question, we used a paradigm inspired by Klatzky et al.: After visually displayed passive movements along curved streets in a city environment, participants were asked to point back to where they started. In half of the trials the visually displayed turns were accompanied by a matching physical rotation. Results showed that adding physical motion cues did not improve pointing performance. This suggests that physical motions might be less important to prevent disorientation if visuals are naturalistic enough. Furthermore, unexpectedly two participants consistently failed to update the visually simulated heading changes, even when they were accompanied by physical rotations. This suggests that physical motion cues do not necessarily improve spatial orientation ability in VR (by inducing obligatory spatial updating). These findings have noteworthy implications for the design of effective motion simulators.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126659291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual training for Fear of Public Speaking — Design of an audience for immersive virtual environments","authors":"Sandra Poeschl, N. Döring","doi":"10.1109/VR.2012.6180902","DOIUrl":"https://doi.org/10.1109/VR.2012.6180902","url":null,"abstract":"Virtual Reality technology offers great possibilities for Cognitive Behavioral Therapy on Fear of Public Speaking: Clients can be exposed to virtual fear-triggering stimuli (exposure) and are able to role-play in virtual environments, training social skills to overcome their fear. This poster deals with the design of a realistic virtual presentation scenario based on an observation of a real audience.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123332772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality goggles with an integrated tracking system for navigation in neurosurgery","authors":"Ehsan Azimi, J. Doswell, P. Kazanzides","doi":"10.1109/VR.2012.6180913","DOIUrl":"https://doi.org/10.1109/VR.2012.6180913","url":null,"abstract":"Precise tumor identification is crucial in image-guided neurosurgical procedures. With existing navigation systems, the surgeon must turn away from the patient to view the imaging data on a separate monitor. In this study, an innovative system is introduced that illustrates the tumor boundaries precisely augmented on the spot where the tumor is located with regard to the patient. Additionally, it allows the surgeon to track the distal end of the tools contextually, where direct visualization is not possible. In this approach, the tracking system is compact and worn by the surgeon, eliminating the need for additional devices that are bulky and typically limited by line of sight constraints.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120962214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape perception in 3-D scatterplots using constant visual angle glyphs","authors":"Rasmus Stenholt, C. Madsen","doi":"10.1109/VR.2012.6180882","DOIUrl":"https://doi.org/10.1109/VR.2012.6180882","url":null,"abstract":"When viewing 3-D scatterplots in immersive virtual environments, one commonly encountered problem is the presence of clutter, which obscures the view of any structures of interest in the visualization. In order to solve this problem, we propose to render the 3-D glyphs such that they always cover the same amount of screen space. For perceptual reasons, we call this approach constant visual angle glyphs, or CVA glyphs. The use of CVA glyphs implies some desirable perceptual consequences, which have not been previously described or discussed in existing literature: CVA glyphs not only have the prospect of dealing with clutter, but also the prospect of allowing for a better perception of the continuous shapes of structures in 3-D scatterplots. In a formal user evaluation of CVA glyphs, the results indicate that such glyphs do allow for better perception of shapes in 3-D scatterplots compared to regular perspective glyphs, especially when a large amount of clutter is present. Furthermore, our evaluation revealed that perception of structures in 3-D scatterplots is significantly affected by the volumetric density of the glyphs in the plot.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121666342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acoustically enriched virtual worlds with minimum effort","authors":"Julia Fröhlich, I. Wachsmuth","doi":"10.1109/VR.2012.6180924","DOIUrl":"https://doi.org/10.1109/VR.2012.6180924","url":null,"abstract":"To improve user experiences and immersion within virtual environments, auditory experience has long been claimed to be of notable importance [1]. This paper introduces a framework in which objects, enriched with information about their sound properties, are processed to generate virtual sound sources. This is done with automatic processing of the 3D scene and therefore minimizes the effort needed to develop a multimodal virtual world. In order to create a comprehensive auditory experience, different types of sound sources have to be distinguished. We propose a differentiation into three classes: locally bound static sounds, dynamically created event-based sounds, and ambient sounds to create spatial atmosphere.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"108 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131746094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural feature tracking in JavaScript","authors":"Christoph Oberhofer, Jens Grubert, Gerhard Reitmayr","doi":"10.1109/VR.2012.6180908","DOIUrl":"https://doi.org/10.1109/VR.2012.6180908","url":null,"abstract":"We present an efficient natural feature tracking pipeline implemented solely in JavaScript. It is embedded in a web technology-based Augmented Reality system running plugin-free in web browsers. The evaluation shows that real-time framerates are achieved on desktop computers, while interactive framerates are achieved on smartphones.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114192140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smelling screen: Technique to present a virtual odor source at an arbitrary position on a screen","authors":"H. Matsukura, T. Yoneda, H. Ishida","doi":"10.1109/VR.2012.6180915","DOIUrl":"https://doi.org/10.1109/VR.2012.6180915","url":null,"abstract":"A new olfactory display that can present a virtual odor source at an arbitrary position on a two-dimensional screen is proposed in this paper. The proposed device can give a sensation that an odor is emanating from a certain position on the screen. Fans are placed at the four corners of the screen. The airflows generated by the fans are deflected multiple times by making them collide with each other, and are finally directed toward the user from the position of a virtual odor source on the screen. By introducing odor vapor into the airflows, the odor is spread from the virtual odor source toward the user. The position of the virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. The user can freely move his/her head and sniff at various locations. Potential applications of the proposed device include digital signage, video games, and exhibitions in museums. The result of odor-distribution measurement is presented here to show the validity of the device design.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"409 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115922308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Room-sized informal telepresence system","authors":"Mingsong Dou, Ying Shi, Jan-Michael Frahm, H. Fuchs, Bill Mauchly, Mod Marathe","doi":"10.1109/VR.2012.6180869","DOIUrl":"https://doi.org/10.1109/VR.2012.6180869","url":null,"abstract":"We present a room-sized telepresence system for informal gatherings rather than conventional meetings. Unlike conventional systems which constrain participants to sit in fixed positions, our system aims to facilitate casual conversations between people in two sites. The system consists of a wall of large flat displays at each of the two sites, showing a panorama of the remote scene, constructed from a multiplicity of color and depth cameras. The main contribution of this paper is a solution that ameliorates the eye contact problem during conversation in typical scenarios while still maintaining a consistent view of the entire room for all participants. We achieve this by using two sets of cameras - a cluster of 'Panorama Cameras' located at the center of the display wall, used to capture a panoramic view of the entire room, and a set of 'Personal Cameras' distributed along the display wall to capture front views of nearby participants. A robust segmentation algorithm with the assistance of depth cameras and an image synthesis algorithm work together to generate a consistent view of the entire scene. In our experience this new approach generates fewer distracting artifacts than conventional 3D reconstruction methods, while effectively correcting for eye gaze.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123615199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time simulation of blood vessels and connective tissue for microvascular anastomosis training","authors":"E. Sismanidis","doi":"10.1109/VR.2012.6180906","DOIUrl":"https://doi.org/10.1109/VR.2012.6180906","url":null,"abstract":"The following article presents an application for the real-time simulation of blood vessels and connective tissue. The focus lies on collision handling between vessels. The stability of the methods is demonstrated by pulling two vessels together.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125273245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Im-O-Ret: Immersive object retrieval","authors":"P. Pascoal, Alfredo Ferreira, J. Jorge","doi":"10.1109/VR.2012.6180912","DOIUrl":"https://doi.org/10.1109/VR.2012.6180912","url":null,"abstract":"The growing number of three-dimensional (3D) objects stored in digital libraries brought forth the challenge of search in 3D model collections. To address it, several approaches have been developed for 3D object retrieval. However, these approaches traditionally present query results as a list of thumbnails, and fail to take advantage of recent visualization and interaction technologies. In this paper, we propose an approach to 3D object retrieval using immersive VR for query result visualization. Query results are shown in a three-dimensional virtual space as 3D objects and users can explore these results by navigating in this virtual space and manipulating the scattered objects.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131493753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}