{"title":"ARBlocks: A projective augmented reality platform for educational activities","authors":"R. Roberto, V. Teichrieb","doi":"10.1109/VR.2012.6180937","DOIUrl":"https://doi.org/10.1109/VR.2012.6180937","url":null,"abstract":"This demonstration will allow visitors to use different applications builded for the ARBlocks, a dynamic blocks platform based on projective augmented reality and tangible user interfaces aiming early childhood educational activities. Those applications, along with the platform itself, were designed to be useful tools for educators to teach general subjects for children, such as mathematical and language skills, as well as develop important abilities, like motor coordination and collaboration.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133571954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-motion illusions (vection) in VR — Are they good for anything?","authors":"B. Riecke, Daniel Feuereissen, J. Rieser, T. McNamara","doi":"10.1109/VR.2012.6180875","DOIUrl":"https://doi.org/10.1109/VR.2012.6180875","url":null,"abstract":"When we locomote through real or virtual environments, self-to-object relationships constantly change. Nevertheless, in real environments we effortlessly maintain an ongoing awareness of roughly where we are with respect to our immediate surrounds, even in the absence of any direct perceptual support (e.g., in darkness or with eyes closed). In virtual environments, however, we tend to get lost far more easily. Why is that? Research suggests that physical motion cues are critical in facilitating this “automatic spatial updating” of the self-to-surround relationships during perspective changes. However, allowing for full physical motion in VR is costly and often unfeasible. Here, we demonstrated for the first time that the mere illusion of self-motion (“circular vection”) can provide a similar benefit as actual self-motion: While blindfolded, participants were asked to imagine facing new perspectives in a well-learned room, and point to previously-learned objects. As expected, this task was difficult when participants could not physically rotate to the instructed perspective. Performance was significantly improved, however, when they perceived illusory self-rotation to the novel perspective (even though they did not physically move). This circular vection was induced by a combination of rotating sound fields (“auditory vection”) and biomechanical vection from stepping along a carrousel-like rotating floor platter. In summary, illusory self-motion was shown to indeed facilitate perspective switches and thus spatial orientation. These findings have important implications for both our understanding of human spatial cognition and the design of more effective yet affordable VR simulators. In fact, it might ultimately enable us to relax the need for physical motion in VR by intelligently utilizing self-motion illusions.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"24 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120824708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented reality for forward-looking synthetic aperture radar","authors":"L. Nguyen, F. Koenig","doi":"10.1109/VR.2012.6180923","DOIUrl":"https://doi.org/10.1109/VR.2012.6180923","url":null,"abstract":"The U.S. Army Research Laboratory (ARL) has successfully designed and integrated an augmented reality system into our vehicle-based ultra wideband (UWB) forward-looking synthetic aperture radar (SAR). In this paper, we present the overall architecture of the system and results from our recent experiment.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124080651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of isolated disparity on depth perception in real and virtual environments","authors":"Abdeldjallil Naceri, R. Chellali","doi":"10.1109/VR.2012.6180905","DOIUrl":"https://doi.org/10.1109/VR.2012.6180905","url":null,"abstract":"In this paper, we investigated depth perception in real and virtual environments when binocular disparity is the sole distance cue. The observers were asked to estimate the relative depth of spheres verbally in virtual and actual environments. Constant apparent sized stimuli were used to measure the just-noticeable difference in depth perception, thus avoiding providing a size gradient cue. Results of the experiments revealed individual differences in virtual reality in contrast to reality. Specifically a subgroup of observers had difficulty perceiving the depth of virtual spheres in virtual reality, which may indicate that they rely on apparent size for judging depth. Furthermore, the just-noticeable differences were more variable in the virtual environment than with real objects. Our results reveal individual differences when the disparity-driven convergence cue is the only distance cue provided in virtual reality.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128101543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"mirracle: An augmented reality magic mirror system for anatomy education","authors":"T. Blum, Valerie Kleeberger, Christoph Bichlmeier, N. Navab","doi":"10.1109/VR.2012.6180909","DOIUrl":"https://doi.org/10.1109/VR.2012.6180909","url":null,"abstract":"We present an augmented reality magic mirror for teaching anatomy. The system uses a depth camera to track the pose of a user standing in front of a large display. A volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body. Using gestures, different slices from the CT and a photographic dataset can be selected for visualization. In addition, the system can show 3D models of organs, text information and images about anatomy. For interaction with this data we present a new interaction metaphor that makes use of the depth camera. The visibility of hands and body is modified based on the distance to a virtual interaction plane. This helps the user to understand the spatial relations between his body and the virtual interaction plane.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"08 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124370651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence analysis of visual stimuli on localization of tactile stimuli in augmented reality","authors":"Arinobu Niijima, T. Ogawa","doi":"10.1109/VR.2012.6180904","DOIUrl":"https://doi.org/10.1109/VR.2012.6180904","url":null,"abstract":"In augmented reality (AR) environment, tactile stimuli as if a user touched virtual objects are important for realizing natural interactions. Most previous works employ tactile devices such as vibration actuators. However, the place where the system can stimulate a user depends on the place which is covered by the devices. In addition the accuracy of user's tactile perception is low, so it is difficult to present tactile feedback at intended locations. The purpose of this study is to establish a method that represents smooth two-dimensional tactile moving strokes independent of locations of the tactile devices. Our aim is to let a user perceive that locations of vibrotactile perception are on those of visual stimuli. Thus we can present tactile feedback in larger places with higher resolution by controlling visual stimuli. In this paper we have investigated the correlation between visual stimuli and tactile perception. The results of experiments showed that visual stimuli can induce tactile illusion, which resembles the position of virtual objects.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121283246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance measurements for the Microsoft Kinect skeleton","authors":"M. Livingston, J. Sebastian, Zhuming Ai, Jonathan W. Decker","doi":"10.1109/VR.2012.6180911","DOIUrl":"https://doi.org/10.1109/VR.2012.6180911","url":null,"abstract":"The Microsoft Kinect for Xbox 360 (“Kinect”) provides a convenient and inexpensive depth sensor and, with the Microsoft software development kit, a skeleton tracker (Figure 2). These have great potential to be useful as virtual environment (VE) control interfaces for avatars or for viewpoint control. In order to determine its suitability for our applications, we devised and conducted tests to measure standard performance specifications for tracking systems. We evaluated the noise, accuracy, resolution, and latency of the skeleton tracking software. We also measured the range in which the person being tracked must be in order to achieve these values.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124395338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New input modalities for modern game design and virtual embodiment","authors":"Reinhold Scherer, M. Pröll, B. Allison, G. Müller-Putz","doi":"10.1109/VR.2012.6180932","DOIUrl":"https://doi.org/10.1109/VR.2012.6180932","url":null,"abstract":"Brain-computer interface (BCI) systems are not often used as input devices for modern games, due largely to their low bandwidth. However, BCIs can become a useful input modality when adapting the dynamics of the brain-game interaction, as well as combining them with devices based on other physiological signal to make BCIs more powerful and flexible. We introduce the Graz BCI Game Controller (GBGC) and describe how techniques such as context dependence, dwell timers and other intelligent software tools were implemented in a new system to control the Massive Multiplayer Online Role Playing Game World of Warcraft (WoW).","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116968701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Head-turning approach to eye-tracking in immersive virtual environments","authors":"A. Sherstyuk, Arindam Dey, C. Sandor","doi":"10.1109/VR.2012.6180919","DOIUrl":"https://doi.org/10.1109/VR.2012.6180919","url":null,"abstract":"Reliable and unobtrusive eye tracking remains a technical challenge for immersive virtual environment, especially when Head Mounted Displays (HMD) are used for visualization and users are allowed to move freely in the environment. In this work, we provide experimental evidence that gaze direction can be safely approximated by user head rotation, in HMD-based Virtual Reality (VR) applications, where users actively interact with the environment. We discuss the application range of our approach and consider possible extensions.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117048750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensor-fusion walking-in-place interaction technique using mobile devices","authors":"Ji-Sun Kim, D. Gračanin, Francis K. H. Quek","doi":"10.1109/VR.2012.6180876","DOIUrl":"https://doi.org/10.1109/VR.2012.6180876","url":null,"abstract":"This paper describes a sensor-fusion-based wireless walking-in-place (WIP) interaction technique. We devised a new human-walking detection algorithm that is based on a sensor-fusion using both acceleration and magnetic sensors integrated within a smart phone. Our sensor-fusion approach can be useful for the cases when the detection capability of a single sensor is limited to a certain range of walking speeds, when a system power source is limited, and/or when computation power is limited. The proposed algorithm is versatile enough to handle possible data-loss and random delay in the wireless communication environment, resulting in reduced wireless communication load and computation overhead. The initial study demonstrated that the algorithm can detect dynamic speeds of human walking. The algorithm can be implemented on any mobile device equipped with magnetic and acceleration sensors.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"11 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126290561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}