2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR): Latest Publications

Redirected Spaces: Going Beyond Borders
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446167
E. Langbehn, Paul Lubos, Frank Steinicke
{"title":"Redirected Spaces: Going Beyond Borders","authors":"E. Langbehn, Paul Lubos, Frank Steinicke","doi":"10.1109/VR.2018.8446167","DOIUrl":"https://doi.org/10.1109/VR.2018.8446167","url":null,"abstract":"Real walking in virtual reality (VR) is a promising locomotion technique since it offers multi-modal feedback to the user. Unfortunately, the virtual environment (VE) is limited by the available space in the physical world. So far, several techniques were developed to overcome this problem, e. g. redirected walking (RDW) and the use of impossible spaces. RDW subtly manipulates the viewpoint of the user to reorient her walking direction. Impossible spaces are based on subtle changes of the VE to reuse the same physical space for different virtual spaces. In this research demonstration, we show how these two approaches of redirected walking and impossible spaces can be combined. In particular, for our implementation we focus on the use of curved corridors that benefits both methods.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128006426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
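Illustrative sketch (not from the paper): RDW of the kind described above typically injects a small rotation per step so that a straight virtual path maps to a circular arc in the real room. A minimal Python sketch of such a curvature gain, where the state layout and the `curvature_radius_m` value are assumptions for illustration only:

```python
import math

def apply_curvature_gain(virtual_yaw_deg, step_length_m, curvature_radius_m=7.5):
    """Rotate the virtual heading as the user walks, so that walking a
    straight virtual corridor corresponds to a circular arc of radius
    curvature_radius_m in the real room. The radius here is a hand-picked
    illustrative value, not a threshold from the paper."""
    # Arc geometry: moving step_length_m along a circle of radius r
    # rotates the heading by step / r radians.
    delta_rad = step_length_m / curvature_radius_m
    return virtual_yaw_deg + math.degrees(delta_rad)

# Example: 100 steps of 1 cm (1 m walked) inject ~7.6 degrees of rotation.
yaw = 0.0
for _ in range(100):
    yaw = apply_curvature_gain(yaw, 0.01)
print(f"Injected rotation after 1 m: {yaw:.2f} deg")
```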
AirwayVR: Learning Endotracheal Intubation in Virtual Reality
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446075
P. Rajeswaran, Na-Teng Hung, T. Kesavadas, J. Vozenilek, Praveen Kumar
{"title":"AirwayVR: Learning Endotracheal Intubation in Virtual Reality","authors":"P. Rajeswaran, Na-Teng Hung, T. Kesavadas, J. Vozenilek, Praveen Kumar","doi":"10.1109/VR.2018.8446075","DOIUrl":"https://doi.org/10.1109/VR.2018.8446075","url":null,"abstract":"Endotracheal intubation is a procedure in which a tube is passed through the mouth into the trachea (windpipe) to maintain an airway and provide artificial respiration. This lifesaving procedure, utilized in many clinical situations, requires complex psychomotor skills. Healthcare providers need significant training and experience to acquire skills necessary for a quick and atraumatic endotracheal intubation to prevent complications. However, medical professionals have limited training platforms and opportunities to be trained on this procedure. In this poster, we present a virtual reality-based simulation trainer for intubation training. This VR based intubation trainer provides an environment for healthcare professionals to assimilate these complex psychomotor skills while also allowing a safe place to practice swift and atraumatic intubation. User survey results are presented demonstrating that VR is a promising platform to train medical professionals effectively for this procedure.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130199851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Neurophysiology of Visual-Motor Learning During a Simulated Marksmanship Task in Immersive Virtual Reality
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446068
J. Clements, Regis Kopper, David J. Zielinski, H. Rao, M. Sommer, Elayna P Kirsch, B. Mainsah, L. Collins, L. G. Appelbaum
{"title":"Neurophysiology of Visual-Motor Learning During a Simulated Marksmanship Task in Immersive Virtual Reality","authors":"J. Clements, Regis Kopper, David J. Zielinski, H. Rao, M. Sommer, Elayna P Kirsch, B. Mainsah, L. Collins, L. G. Appelbaum","doi":"10.1109/VR.2018.8446068","DOIUrl":"https://doi.org/10.1109/VR.2018.8446068","url":null,"abstract":"Immersive virtual reality (VR) systems offer flexible control of an interactive environment, along with precise position and orientation tracking of realistic movements. Immersive VR can also be used in conjunction with neurophysiological monitoring techniques, such as electroencephalography (EEG), to record neural activity as users perform complex tasks. As such, the fusion of VR, kinematic tracking, and EEG offers a powerful testbed for naturalistic neuroscience research. In this study, we combine these elements to investigate the cognitive and neural mechanisms that underlie motor skill learning during a multi-day simulated marksmanship training regimen conducted with 20 participants. On each of 3 days, participants performed 8 blocks of 60 trials in which a simulated clay pigeon was launched from behind a trap house. Participants attempted to shoot the moving target with a firearm game controller, receiving immediate positional feedback and running scores after each shot. Over the course of the 3 days that individuals practiced this protocol, shot accuracy and precision improved significantly while reaction times got significantly faster. Furthermore, results demonstrate that more negative EEG amplitudes produced over the visual cortices correlate with better shooting performance measured by accuracy, reaction times, and response times, indicating that early visual system plasticity underlies behavioral learning in this task. These findings point towards a naturalistic neuroscience approach that can be used to identify neural markers of marksmanship performance.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134334170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
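Illustrative sketch (not the authors' analysis pipeline): the reported link between visual-cortex EEG amplitude and shooting performance can be quantified with a simple Pearson correlation. The array names and shapes below are assumptions:

```python
import numpy as np

def amplitude_performance_corr(erp_amplitude, accuracy):
    """Pearson correlation between per-block evoked EEG amplitude and
    per-block shooting accuracy.

    erp_amplitude: (n_blocks,) mean amplitude over visual-cortex channels
                   in a chosen time window (assumed precomputed).
    accuracy:      (n_blocks,) fraction of targets hit per block.
    """
    return np.corrcoef(erp_amplitude, accuracy)[0, 1]

# Synthetic example: amplitudes growing more negative as accuracy improves
# should yield a negative correlation, mirroring the reported effect.
rng = np.random.default_rng(0)
amp = -np.linspace(2.0, 6.0, 24) + rng.normal(0.0, 0.5, 24)  # microvolts
acc = np.linspace(0.4, 0.8, 24) + rng.normal(0.0, 0.05, 24)
print(f"r = {amplitude_performance_corr(amp, acc):.2f}")
```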
An Approach to Embodiment and Interactions with Digital Entities in Mixed-Reality Environments
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446458
Mohamed Handosa, H. Schulze, D. Gračanin, Matthew Tucker, Mark Manuel
{"title":"An Approach to Embodiment and Interactions with Digital Entities in Mixed-Reality Environments","authors":"Mohamed Handosa, H. Schulze, D. Gračanin, Matthew Tucker, Mark Manuel","doi":"10.1109/VR.2018.8446458","DOIUrl":"https://doi.org/10.1109/VR.2018.8446458","url":null,"abstract":"The advances in mixed reality (MR) technologies provide an opportunity to support the deployment and use of MR for training and education. We describe an approach that extends the functionality of the Microsoft HoloLens device to support a wider range of embodied interactions by making use of the Microsoft Kinect V2 device. The embodied interactions can support novel interaction scenarios, especially within the context of training and skills development, thereby removing or reducing the need for training equipment.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133337446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
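Illustrative sketch (an assumption, not the paper's stated method): bringing Kinect V2 body tracking into the HoloLens coordinate frame requires a rigid transform between the two device frames, which can be estimated from corresponding points observed in both, e.g. via the Kabsch algorithm:

```python
import numpy as np

def estimate_rigid_transform(kinect_pts, hololens_pts):
    """Least-squares rigid transform (R, t) mapping Kinect-frame points
    onto HoloLens-frame points (Kabsch algorithm). Inputs are (N, 3)
    arrays of corresponding points, e.g. a hand tracked by both devices
    at the same instants."""
    ck, ch = kinect_pts.mean(axis=0), hololens_pts.mean(axis=0)
    H = (kinect_pts - ck).T @ (hololens_pts - ch)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ch - R @ ck
    return R, t

def to_hololens(R, t, joint_xyz):
    """Map a single Kinect joint position into HoloLens world space."""
    return R @ joint_xyz + t
```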
Investigating the Effects of Anthropomorphic Fidelity of Self-Avatars on Near Field Depth Perception in Immersive Virtual Environments
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446539
Elham Ebrahimi, Leah S. Hartman, Andrew C. Robb, C. Pagano, Sabarish V. Babu
{"title":"Investigating the Effects of Anthropomorphic Fidelity of Self-Avatars on Near Field Depth Perception in Immersive Virtual Environments","authors":"Elham Ebrahimi, Leah S. Hartman, Andrew C. Robb, C. Pagano, Sabarish V. Babu","doi":"10.1109/VR.2018.8446539","DOIUrl":"https://doi.org/10.1109/VR.2018.8446539","url":null,"abstract":"Immersive Virtual Environments (IVEs) are becoming more accessible and more widely utilized for training. Previous research has shown that the matching of visual and proprioceptive information is important for calibration. While research has demonstrated that self-avatars can enhance ones' sense of presence and improve distance perception, the effects of self-avatar fidelity on near field distance estimations has yet to be investigated. This study tested the effect of avatar fidelity on the accuracy of distance estimations in the near-field. Performance with a virtual avatar was also compared to real-world performance. Three levels of fidelity were tested; 1) an immersive self-avatar with realistic limbs, 2) a low-fidelity self-avatar showing only joint locations, and 3) end-effector only. The results suggest that reach estimations become more accurate as the visual fidelity of the avatar increases, with accuracy for high fidelity avatars approaching real-world performance as compared to low-fidelity and end-effector conditions. In all conditions reach estimations became more accurate after receiving feedback during a calibration phase.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128812560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
A Calibration Method for On-Vehicle AR-HUD System Using Mixed Reality Glasses
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446213
Nianchen Deng, Yanqing Zhou, Jiannan Ye, Xubo Yang
{"title":"A Calibration Method for On-Vehicle AR-HUD System Using Mixed Reality Glasses","authors":"Nianchen Deng, Yanqing Zhou, Jiannan Ye, Xubo Yang","doi":"10.1109/VR.2018.8446213","DOIUrl":"https://doi.org/10.1109/VR.2018.8446213","url":null,"abstract":"Calibration is a key step for on-vehicle AR-HUD systems to ensure the augmented information to be correctly viewed by the driver. State-of-art calibration methods require setting up of spatial tracking devices or attaching markers on vehicles, which is time-consuming and error-prone. In this paper, we present a novel multi-viewpoints calibration method for AR-HUD using only a mixed reality glasses such as HoloLens. The full calibration process can be done in one minute and provides high precise calibration result, while no markers need to be attached on vehicle.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131280702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
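Illustrative sketch (the abstract does not spell out the calibration math; this is one plausible ingredient, not the paper's method): at each viewpoint, the pose of the glasses' camera relative to known reference points can be recovered with a perspective-n-point solve, e.g. OpenCV's solvePnP:

```python
import numpy as np
import cv2

def estimate_viewpoint_pose(object_pts, image_pts, K):
    """Camera pose relative to known 3D reference points.

    object_pts: (N, 3) float32 reference points in the car frame
                (N >= 4; assumed measured beforehand).
    image_pts:  (N, 2) float32 detected 2D projections of those points.
    K:          (3, 3) camera intrinsic matrix.
    """
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if not ok:
        raise RuntimeError("PnP failed; check the correspondences")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec.reshape(3)
```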
CarpetVR: The Magic Carpet Meets the Magic Mirror
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446576
V. Lempitsky, Alexander Vakhitov, A. Starostin
{"title":"CarpetVR: The Magic Carpet Meets the Magic Mirror","authors":"V. Lempitsky, Alexander Vakhitov, A. Starostin","doi":"10.1109/VR.2018.8446576","DOIUrl":"https://doi.org/10.1109/VR.2018.8446576","url":null,"abstract":"We present CarpetVR - a new system for marker-based positional tracking suitable for mobile VR (Figure 1). The system utilizes all sensors present on a modern smartphone (a camera, a gyroscope, and an accelerometer) and does not require any additional sensors. CarpetVR uses a single floor marker that we call the magic carpet. CarpetVR augments a standard mobile VR setup with a slanted mirror that can be attached either to the smartphone or to the head mount in front of the smartphone camera. As the person walks over the marker, the smartphone camera is able to see the marker thanks to the reflection in the mirror. Our tracking engine then uses a computer vision module to detect the marker and to estimate the smartphone position with respect to the marker at 40 frames per second. This estimate is integrated with high framerate signals from the gyroscope and the accelerometer. The resulting estimates of the position and the orientation are then used to render the virtual world. Our sensor fusion algorithm ensures minimal-latency tracking with very little jitter.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117189547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
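Illustrative sketch (the paper's actual filter is not given in the abstract): one simple way to fuse 40 Hz marker fixes with high-rate accelerometer data is a complementary filter that dead-reckons between camera updates and pulls the estimate toward each marker fix. The gain and the world-frame acceleration input are assumptions:

```python
import numpy as np

class PositionFuser:
    """Toy complementary filter: integrate acceleration at IMU rate and
    correct accumulated drift whenever a slower marker-based fix arrives."""

    def __init__(self, blend=0.15):
        self.p = np.zeros(3)   # position estimate (m)
        self.v = np.zeros(3)   # velocity estimate (m/s)
        self.blend = blend     # how strongly a camera fix corrects drift

    def imu_update(self, accel_world, dt):
        """accel_world: gravity-compensated acceleration, world frame."""
        self.v += accel_world * dt
        self.p += self.v * dt

    def camera_update(self, p_marker):
        """p_marker: position from marker detection (~40 Hz)."""
        self.p += self.blend * (p_marker - self.p)
```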
Teach Me a Story: an Augmented Reality Application for Teaching History in Middle School
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446412
Barbara Schiavi, Franck Gechter, Celine Gechter, A. Rizzo
{"title":"Teach Me a Story: an Augmented Reality Application for Teaching History in Middle School","authors":"Barbara Schiavi, Franck Gechter, Celine Gechter, A. Rizzo","doi":"10.1109/VR.2018.8446412","DOIUrl":"https://doi.org/10.1109/VR.2018.8446412","url":null,"abstract":"Augmented Reality (AR) is now more and more widespread in many application fields. Thanks to the new progresses in computer power, AR can now be used widely in education without any expensive additional devices. In this paper is presented the feedback of an experimental protocol using an AR application as an additional support for a History lesson in secondary schools. Even if the technical part has a lead role in the student experience, the most challenging issue is related to the choice of the teaching lesson as itself which must fit several, sometimes contradictory, requirements.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124837424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Scope of Manipulability Sharing: A Case Study for Sports Training
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8446350
Yoshiyuki Tanaka, Tadayoshi Shiokawa, M. Shiokawa
{"title":"Scope of Manipulability Sharing: A Case Study for Sports Training","authors":"Yoshiyuki Tanaka, Tadayoshi Shiokawa, M. Shiokawa","doi":"10.1109/VR.2018.8446350","DOIUrl":"https://doi.org/10.1109/VR.2018.8446350","url":null,"abstract":"Recently, advanced information communication technology and robotic technology have been used for developing a sports-like game application and low-cost interface devices, such as wii-sports, to encourage performing exercises in a room. Such an application can provide an easy-to-understand visual feedback for players by using a virtual reality head-mount display. However, those do not provide quantitative information about the body forms required for training during sports to improve the performance and skill of players. The purpose of this study is to develop an intelligent scope of manipulability sharing considering the dynamic change in the human body form (structure) in sports motion, thus providing an evaluation result based on the manipulability theory for both the player and the instructor in real time. Evaluation tests using a prototype system using a smart glasses and a Kinect sensor are conducted to verify the effectiveness of the proposed scope of manipulability sharing in pitching and batting motions.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126364541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
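Illustrative sketch (the abstract invokes manipulability theory without formulas; the classic Yoshikawa measure w = sqrt(det(J Jᵀ)) is assumed here): given a Jacobian J of a limb model, the measure and a toy two-link example look like this:

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability w = sqrt(det(J @ J.T)) for a Jacobian J
    mapping joint velocities to end-effector (e.g. hand or bat) velocity;
    larger w means the posture can move the end-effector more freely."""
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def planar_arm_jacobian(q1, q2, l1=0.3, l2=0.3):
    """Jacobian of a planar 2-link arm (illustrative body-segment model)."""
    s1, s12 = np.sin(q1), np.sin(q1 + q2)
    c1, c12 = np.cos(q1), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

print(manipulability(planar_arm_jacobian(0.3, 1.2)))   # bent elbow: ~0.084
print(manipulability(planar_arm_jacobian(0.3, 0.01)))  # near-straight: ~0
```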
Yea Big, Yea High: A 3D User Interface for Surface Selection by Progressive Refinement in Virtual Environments
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) Pub Date: 2018-03-18 DOI: 10.1109/VR.2018.8447559
Bret Jackson, Brighten Jelke, Gabriel Brown
{"title":"Yea Big, Yea High: A 3D User Interface for Surface Selection by Progressive Refinement in Virtual Environments","authors":"Bret Jackson, Brighten Jelke, Gabriel Brown","doi":"10.1109/VR.2018.8447559","DOIUrl":"https://doi.org/10.1109/VR.2018.8447559","url":null,"abstract":"We present Yea Big, Yea High - a 3D user interface for surface selection in virtual environments. The interface extends previous selection interfaces that support exploratory visualization and 3D modeling. While these systems primarily focus on selecting single objects, Yea Big, Yea High allows users to select part of a surface mesh, a common task for data analysis, model editing, or annotation. The selection can be progressively refined by physically indicating a region of interest between a user's hands. We describe the design of the interface and key challenges we encountered. We present findings from a case study exploring design choices and use of the system.","PeriodicalId":355048,"journal":{"name":"2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115950533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
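Illustrative sketch (one way to realize the two-handed region selection described above; the paper's exact geometry is not given): treat the region between the hands as an axis-aligned box and intersect successive selections to refine progressively:

```python
import numpy as np

def select_between_hands(vertices, left_hand, right_hand, padding=0.02):
    """Indices of mesh vertices inside the axis-aligned box spanned by the
    two hand positions (padding in meters; an assumed tolerance).

    vertices: (N, 3) mesh vertex positions in world space.
    left_hand, right_hand: (3,) tracked hand positions.
    """
    lo = np.minimum(left_hand, right_hand) - padding
    hi = np.maximum(left_hand, right_hand) + padding
    inside = np.all((vertices >= lo) & (vertices <= hi), axis=1)
    return np.nonzero(inside)[0]

# Progressive refinement: intersect selections as the hands move closer.
verts = np.random.rand(1000, 3)
coarse = select_between_hands(verts, np.array([0.2, 0.2, 0.2]),
                              np.array([0.8, 0.8, 0.8]))
fine = np.intersect1d(coarse, select_between_hands(
    verts, np.array([0.3, 0.3, 0.3]), np.array([0.6, 0.6, 0.6])))
```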