{"title":"Investigating the impact of perturbed visual and proprioceptive information in near-field immersive virtual environment","authors":"Elham Ebrahimi, Bliss M. Altenhoff, C. Pagano, Sabarish V. Babu, J. A. Jones","doi":"10.1109/VR.2015.7223350","DOIUrl":"https://doi.org/10.1109/VR.2015.7223350","url":null,"abstract":"We report the results of an empirical evaluation to examine the carryover effects of calibrations to one of three perturbations of visual and proprioceptive feedback: i) Minus condition (-20% gain) in which a visual stylus appeared at 80% of the distance of a physical stylus, ii) Neutral condition (0% gain) in which a visual stylus was co-located with a physical stylus, and iii) Plus condition (+20% gain) in which the visual stylus appeared at 120% of the distance of the physical stylus. Feedback was shown to calibrate distance judgments quickly within an IVE, with estimates being farthest after calibrating to visual information appearing nearer (Minus condition), and nearest after calibrating to visual information appearing further (Plus condition).","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124660090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual reality training of manual procedures in the nuclear sector","authors":"J. Cíger, Mehdi Sbaouni, Christian Segot","doi":"10.1109/VR.2015.7223455","DOIUrl":"https://doi.org/10.1109/VR.2015.7223455","url":null,"abstract":"A glove box simulator is presented for a safe and cost-effective training of the operators in the nuclear industry. The focus is on learning of the proper safety procedures and correct maintenance of a glove box in the presence of potentially radioactive substances. Two common situations are explored - operator working in the glove box and operator performing maintenance on it.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129897699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time welding training system base on virtual reality","authors":"Benkai Xie, Qiang Zhou, Liang Yu","doi":"10.1109/VR.2015.7223419","DOIUrl":"https://doi.org/10.1109/VR.2015.7223419","url":null,"abstract":"Onew360 is a training simulator for simulating gas metal arc welding (GMAW) welding. This system is comprised of standard welding hardware components (helmet, gun, work-piece), a PC, a head-mounted display, a tracking system for both the torch and the user's head, and external audio speakers. The track model of welding simulator using single-camera vision measurement technology to calculate the position of the welding gun and helmet, and the simulation model using simple model method to simulate the weld geometry based on the orientation and speed of the welding torch. So that the system produce a realistic, interactive, and immersive welding experience.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114525170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3DTouch: A wearable 3D input device for 3D applications","authors":"Anh M Nguyen, Amy Banic","doi":"10.1109/VR.2015.7223451","DOIUrl":"https://doi.org/10.1109/VR.2015.7223451","url":null,"abstract":"3D applications appear in every corner of life in the current technology era. There is a need for an ubiquitous 3D input device that works with many different platforms, from head-mounted displays (HMDs) to mobile touch devices, 3DTVs, and even the Cave Automatic Virtual Environments. We present 3DTouch [1], a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the missing gap of a 3D input device that is self-contained, mobile, and universally works across various 3D platforms. This video presents a working prototype of our solution, which is described in details in the paper [1]. Our approach relies on a relative positioning technique using an optical laser sensor (OPS) and a 9-DOF inertial measurement unit (IMU). The device employs touch input for the benefits of passive haptic feedback, and movement stability. On the other hand, with touch interaction, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques including selection, translation, and rotation using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. We envision that modular solutions like 3DTouch opens up a whole new design space for interaction techniques to further develop on.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116262524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying latency to half of a self-avatar's body to change real walking patterns","authors":"G. Samaraweera, A. Perdomo, J. Quarles","doi":"10.1109/VR.2015.7223329","DOIUrl":"https://doi.org/10.1109/VR.2015.7223329","url":null,"abstract":"Latency (i.e., time delay) in a Virtual Environment is known to disrupt user performance, presence and induce simulator sickness. However, can we utilize the effects caused by experiencing latency to benefit virtual rehabilitation technologies? We investigate this question by conducting an experiment that is aimed at altering gait by introducing latency applied to one side of a self-avatar with a front-facing mirror. This work was motivated by previous findings where participants altered their gait with increasing latency, even when participants failed to notice considerably high latencies as 150ms or 225ms. In this paper, we present the results of a study that applies this novel technique to average healthy persons (i.e., to demonstrate the feasibility of the approach before applying it to persons with disabilities). The results indicate a tendency to create asymmetric gait in persons with symmetric gait when latency is applied to one side of their self-avatar. Thus, the study shows the potential of applying one-sided latency in a self-avatar, which could be used to develop asymmetric gait rehabilitation techniques.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121557855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optical see-through head up displays' effect on depth judgments of real world objects","authors":"Missie Smith, Nadejda Doutcheva, Joseph L. Gabbard, G. Burnett","doi":"10.1109/VR.2015.7223465","DOIUrl":"https://doi.org/10.1109/VR.2015.7223465","url":null,"abstract":"Recent research indicates that users consistently underestimate depth judgments to Augmented Reality (AR) graphics when viewed through optical see-through displays. However, to our knowledge, little work has examined how AR graphics may affect depth judgments of real world objects that have been overlaid or annotated with AR graphics. This study begins a preliminary analysis whether AR graphics have directional effects on users' depth perception of real-world objects, as might be experienced in vehicle driving scenarios (e.g., as viewed via an optical see-through head-up display or HUD). Twenty-four participants were asked to judge the depth of a physical pedestrian proxy figure moving towards them at a constant rate of 1 meter/second. Participants were shown an initial target location that varied in distance from 11 to 20 m and were then asked to press a button to indicate when the moving target was perceived to be at the previously specified target location. Each participant experienced three different display conditions: no AR visual display (control), a conformal AR graphic overlaid on the pedestrian via a HUD, and the same graphic presented on a tablet physically located on the pedestrian. Participants completed 10 trials (one for each target distance between 11 and 20 inclusive) per display condition for a total of 30 trials per participant. The judged distance from the correct location was recorded, and after each trial, participants' confidence in determining the correct distance was captured. Across all conditions, participants underestimated the distance of the physical object consistent with existing literature. Greater variability was observed in the accuracy of distance judgments under the AR HUD condition relative to the other two display conditions. In addition, participant confidence levels were considerably lower in the AR HUD condition.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127671468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low cost virtual reality for medical training","authors":"A. Mathur","doi":"10.1109/VR.2015.7223437","DOIUrl":"https://doi.org/10.1109/VR.2015.7223437","url":null,"abstract":"This demo depicts a low cost virtual reality set-up that may be used for medical training and instruction purposes. Using devices such as the Oculus Rift and Razer Hydra, an immersive experience, including hand interactivity can be given. Software running on a PC integrates these devices and presents an interactive and immersive training environment, where trainees are asked to perform a mixed bag of both, simple and complex tasks. These tasks range from identification of certain organs to performing of an actual incision. Trainees learn by doing, albeit in the virtual world. Components of the system are relatively affordable and simple to use, thereby making such a set-up incredibly easy to deploy.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128037838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-dimensional VR interaction using the movement of a mobile display","authors":"Lili Wang, T. Komuro","doi":"10.1109/VR.2015.7223446","DOIUrl":"https://doi.org/10.1109/VR.2015.7223446","url":null,"abstract":"In this study, we propose a VR system for allowing various types of interaction with virtual objects using an autostereoscopic mobile display and an accelerometer. The system obtains the orientation and motion information from the accelerometer attached to the mobile display and reflects them to the motion of virtual objects. It can present 3D images with motion parallax by estimating the position of the user's viewpoint and by displaying properly projected images. Furthermore, our method enables to connect the real space and the virtual space seamlessly through the mobile display by determining the coordinate system so that one of the horizontal surfaces in the virtual space coincides with the display surface. To show the effectiveness of this concept, we implemented an application to simulate food cooking by regarding the mobile display as a frying pan.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128113572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wings and flying in immersive VR — Controller type, sound effects and experienced ownership and agency","authors":"Erik Sikström, Amalia de Götzen, S. Serafin","doi":"10.1109/VR.2015.7223405","DOIUrl":"https://doi.org/10.1109/VR.2015.7223405","url":null,"abstract":"An experiment investigated the subjective experiences of ownership and agency of a pair of virtual wings attached to a motion controlled avatar in an immersive virtual reality setup. A between groups comparison of two ways of controlling the movement of the wings and flight ability. One where the subjects achieved the wing motion and flight ability by using a hand-held video game controller and the other by moving the shoulder. Through four repetitions of a flight task with varying amounts of self-produced audio feedback (from the movement of the virtual limbs), the subjects evaluated their experienced embodiment of the wings on a body ownership and agency questionnaire. The results shows significant differences between the controllers in some of the questionnaire items and that adding self-produced sounds to the avatar, slightly changed the subjects evaluations.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131374909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"flapAssist: How the integration of VR and visualization tools fosters the factory planning process","authors":"Sascha Gebhardt, S. Pick, H. Voet, J. Utsch, T. A. Khawli, U. Eppelt, R. Reinhard, Chris Buescher, B. Hentschel, T. Kuhlen","doi":"10.1109/VR.2015.7223355","DOIUrl":"https://doi.org/10.1109/VR.2015.7223355","url":null,"abstract":"Virtual Reality (VR) systems are of growing importance to aid decision support in the context of the digital factory, especially factory layout planning. While current solutions either focus on virtual walkthroughs or the visualization of more abstract information, a solution that provides both, does currently not exist. To close this gap, we present a holistic VR application, called Factory Layout Planning Assistant (flapAssist). It is meant to serve as a platform for planning the layout of factories, while also providing a wide range of analysis features. By being scalable from desktops to CAVEs and providing a link to a central integration platform, flapAssist integrates well in established factory planning workflows.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134591370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}