{"title":"The effect of head mounted display weight and locomotion method on the perceived naturalness of virtual walking speeds","authors":"N. C. Nilsson, S. Serafin, R. Nordahl","doi":"10.1109/VR.2015.7223389","DOIUrl":"https://doi.org/10.1109/VR.2015.7223389","url":null,"abstract":"This poster details a study investigating the effect of Head Mounted Display (HMD) weight and locomotion method (Walking-In-Place and treadmill walking) on the perceived naturalness of virtual walking speeds. The results revealed significant main effects of movement type, but no significant effects of HMD weight were identified.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128037416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A building-wide indoor tracking system for augmented reality","authors":"S. Côté, F. Rheault, J. Barnard","doi":"10.1109/VR.2015.7223348","DOIUrl":"https://doi.org/10.1109/VR.2015.7223348","url":null,"abstract":"Buildings require regular maintenance, and augmented reality (AR) could advantageously be used to facilitate the process. However, such AR systems would require tracking that is accurate enough to meet the needs of engineers and that works throughout entire buildings. Popular tracking systems based on visual features cannot easily be applied in such situations, because of the limited number of visual features indoors and the high degree of similarity between rooms. In this project, we propose a hybrid system combining low-accuracy radio-based tracking with high-accuracy tracking using depth images. Results show tracking accuracy that would be compatible with AR applications.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121818214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards context-sensitive reorientation for real walking in virtual reality","authors":"Timofey Grechkin, Mahdi Azmandian, M. Bolas, Evan A. Suma","doi":"10.1109/VR.2015.7223357","DOIUrl":"https://doi.org/10.1109/VR.2015.7223357","url":null,"abstract":"Redirected walking techniques have been introduced to overcome physical limitations for natural locomotion in virtual reality. Although subtle perceptual manipulations are helpful to keep users within relatively small tracked spaces, it is inevitable that users will approach critical boundary limits. Current solutions to this problem involve breaks in presence by introducing distractors, or freezing the virtual world relative to the user's perspective. We propose an approach that integrates into the virtual world narrative to draw users' attention and to cause them to temporarily alter their course to avoid going off bounds. This method ties together unnoticeable translation, rotation, and curvature gains, efficiently reorienting the user while maintaining the user's sense of immersion. We also discuss how this new method can be effectively used in conjunction with other reorientation techniques.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127373742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AR-SSVEP for brain-machine interface: Estimating user's gaze in head-mounted display with USB camera","authors":"S. Horii, S. Nakauchi, M. Kitazaki","doi":"10.1109/VR.2015.7223361","DOIUrl":"https://doi.org/10.1109/VR.2015.7223361","url":null,"abstract":"We aim to develop a brain-machine interface (BMI) system that estimates a user's gaze or attention on an object in order to pick it up in the real world. In Experiments 1 and 2 we measured steady-state visual evoked potential (SSVEP) using luminance- and/or contrast-modulated flickers of photographic scenes presented on a head-mounted display (HMD). We applied a multiclass SVM to estimate gaze locations for every 2-s time-window of data, and obtained significantly good classification of gaze locations under leave-one-session-out cross-validation. In Experiment 3 we measured SSVEP using luminance- and contrast-modulated flickers of real scenes captured online by a USB camera and presented on the HMD. We put AR markers on real objects and made their locations flicker on the HMD. We obtained the best gaze-classification performance with the highest luminance and contrast modulation (73-91% accuracy at a chance level of 33%), and significantly good classification with low (25% of the highest) luminance and contrast modulation (42-50% accuracy). These results suggest that luminance-modulated flickers of real scenes captured through a USB camera can be applied to BMI by using augmented reality technology.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130422341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using astigmatism in wide angle HMDs to improve rendering","authors":"Daniel Pohl, Timo Bolkart, Stefan Nickels, O. Grau","doi":"10.1109/VR.2015.7223396","DOIUrl":"https://doi.org/10.1109/VR.2015.7223396","url":null,"abstract":"Lenses in modern consumer HMDs introduce distortions like astigmatism: only the center area of the displayed content can be perceived as sharp, while with increasing distance from the center the image goes out of focus. We show with three new approaches that this undesired side effect can be used in a positive way to save calculations in blurry areas. For example, using sampling maps to lower the detail in areas where the image is blurred by astigmatism increases performance by a factor of 2 to 3. Further, we introduce a new calibration of user-specific viewing parameters that increases performance by about 20-75%.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130440829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using augmented reality to support situated analytics","authors":"Neven A. M. ElSayed, B. Thomas, Ross T. Smith, K. Marriott, J. Piantadosi","doi":"10.1109/VR.2015.7223352","DOIUrl":"https://doi.org/10.1109/VR.2015.7223352","url":null,"abstract":"We draw from the domains of Visual Analytics and Augmented Reality to support a new form of in-situ interactive visual analysis. We present a Situated Analytics model, a novel interaction, and a visualization concept for reasoning support. Situated Analytics has four primary elements: situated information, abstract information, augmented reality interaction, and analytical interaction.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132364655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D heterogeneous interactive web mapping application","authors":"Quoc-Dinh Nguyen, A. Devaux, M. Brédif, N. Paparoditis","doi":"10.1109/VR.2015.7223426","DOIUrl":"https://doi.org/10.1109/VR.2015.7223426","url":null,"abstract":"Modern web browsers offer remarkable possibilities through HTML5, making it possible to harness the full power of a device, such as its GPU and all of its sensors: GPS, accelerometer, camera, etc. The ability to put hardware-accelerated 3D content in the browser enables the creation of new web-based applications that were previously the exclusive domain of the desktop environment. This paper introduces a novel implementation of a WebGL-based 3D GIS navigation system that allows end-users to navigate a realistic and immersive 3D urban scene and to interact with heterogeneous spatial data such as panoramic images, laser scans, 3D city models, and vector data, with modern functionalities such as using a smartphone as a remote control, rendering for 3D screens, and making the scene dynamic.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual reality toolbox for experimental psychology — Research demo","authors":"Madis Vasser, Markus Kängsepp, Kalver Kilvits, Taavi Kivisik, Jaan Aru","doi":"10.1109/VR.2015.7223445","DOIUrl":"https://doi.org/10.1109/VR.2015.7223445","url":null,"abstract":"We present a general toolbox for virtual reality (VR) research in the field of psychology. Our aim is to simplify the generation and setup of complicated VR scenes for researchers. Various study protocols about perception, attention, cognition, and memory can be constructed using our toolbox. Here we specifically showcase a fully functional demo of the change blindness phenomenon. Video: http://youtu.be/xG1hUYTQQbk.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122400004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distance estimation in large immersive projection systems, revisited","authors":"G. Bruder, F. Sanz, A. Olivier, A. Lécuyer","doi":"10.1109/VR.2015.7223320","DOIUrl":"https://doi.org/10.1109/VR.2015.7223320","url":null,"abstract":"When walking within an immersive projection environment, accommodation distance, parallax, and angular resolution vary according to the distance between the user and the projection walls, which can influence spatial perception. As CAVE-like virtual environments get bigger, accurate spatial perception within the projection setup becomes increasingly important for application domains that require the user to be able to naturally explore a virtual environment by moving through the physical interaction space. In this paper we describe an experiment which analyzes how distance estimation is biased when the distance to the screen and parallax vary. The experiment was conducted in a large immersive projection setup with up to ten meters of interaction space. The results showed that both the screen distance and parallax have a strong asymmetric effect on distance judgments. We found an increased distance underestimation for positive parallax conditions. In contrast, we found less distance overestimation for negative and zero parallax conditions. We conclude the paper by discussing the results with a view toward future large immersive projection environments.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123067782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Desktop versions of the string-based haptic interface — SPIDAR","authors":"A. Jayasiri, Shuhan Ma, Yihan Qian, K. Akahane, Makoto Sato","doi":"10.1109/VR.2015.7223364","DOIUrl":"https://doi.org/10.1109/VR.2015.7223364","url":null,"abstract":"Haptic interfaces have seen extensive development and widespread use in virtual reality applications worldwide. In this paper, we introduce the research and development of desktop versions of the user-friendly string-based haptic interface SPIDAR, conducted in the Sato Makoto Laboratory at the Tokyo Institute of Technology. This haptic interface can be used in various types of virtual reality applications, from simple pick-and-place tasks to more complicated physical interactions in virtual worlds.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128477262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}