2008 IEEE Virtual Reality Conference: Latest Publications

MIRAGE: A Touch Screen based Mixed Reality Interface for Space Planning Applications
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480797
Gun A. Lee, Hyun Kang, Wookho Son
{"title":"MIRAGE: A Touch Screen based Mixed Reality Interface for Space Planning Applications","authors":"Gun A. Lee, Hyun Kang, Wookho Son","doi":"10.1109/VR.2008.4480797","DOIUrl":"https://doi.org/10.1109/VR.2008.4480797","url":null,"abstract":"Space planning is one of the popular applications of VR technology including interior design, architecture design, and factory layout. In order to provide easier and efficient methods to accommodate physical objects into virtual space under plan, we suggest applying mixed reality (MR) interface. Our MR system consists of a video see-through display with a touch screen interface, mounted on a mobile platform, and we use screen space 3D manipulations to arrange virtual objects within the MR scene. Investigating the interface with our prototype implementation, we are convinced that our system will help users to design spaces in more easy and effective way.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126004808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
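The abstract leaves the screen-space manipulation math implicit. As a hedged illustration only (not the authors' method, and every name below is hypothetical), one common way to drag a virtual object with a touch screen is to unproject the touch point into a world-space ray and move the object to the ray's intersection with the ground plane:

```python
import numpy as np

def touch_to_ground_point(u, v, cam_pos, view_inv, proj_inv, viewport):
    """Unproject a touch at pixel (u, v) onto the y = 0 ground plane."""
    w, h = viewport
    # Touch point in normalized device coordinates, on the near plane.
    ndc = np.array([2.0 * u / w - 1.0, 1.0 - 2.0 * v / h, -1.0, 1.0])
    # Back through the projection to eye space; keep only a direction.
    eye = proj_inv @ ndc
    eye = np.array([eye[0], eye[1], -1.0, 0.0])
    # Into world space; normalize to get the pick ray.
    ray = (view_inv @ eye)[:3]
    ray /= np.linalg.norm(ray)
    # Intersect the ray from the camera with the plane y = 0.
    t = -cam_pos[1] / ray[1]
    return cam_pos + t * ray
```

A dragged object would simply follow this intersection point from frame to frame while the finger stays down.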
A Mixed Reality Approach for Merging Abstract and Concrete Knowledge
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480746
J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, Benjamin C. Lok
{"title":"A Mixed Reality Approach for Merging Abstract and Concrete Knowledge","authors":"J. Quarles, S. Lampotang, I. Fischler, P. Fishwick, Benjamin C. Lok","doi":"10.1109/VR.2008.4480746","DOIUrl":"https://doi.org/10.1109/VR.2008.4480746","url":null,"abstract":"Mixed reality's (MR) ability to merge real and virtual spaces is applied to merging different knowledge types, such as abstract and concrete knowledge. To evaluate whether the merging of knowledge types can benefit learning, MR was applied to an interesting problem in anesthesia machine education. The virtual anesthesia machine (VAM) is an interactive, abstract 2D transparent reality simulation of the internal components and invisible gas flows of an anesthesia machine. It is widely used in anesthesia education. However when presented with an anesthesia machine, some students have difficulty transferring abstract VAM knowledge to the concrete real device. This paper presents the augmented anesthesia machine (AAM). The AAM applies a magic-lens approach to combine the VAM simulation and a real anesthesia machine. The AAM allows students to interact with the real anesthesia machine while visualizing how these interactions affect the internal components and invisible gas flows in the real world context. To evaluate the AAM's learning benefits, a user study was conducted. Twenty participants were divided into either the VAM (abstract only) or AAM (concrete+abstract) conditions. The results of the study show that MR can help users bridge their abstract and concrete knowledge, thereby improving their knowledge transfer into real world domains.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125316769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 41
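As an illustration of the magic-lens idea the AAM builds on (a minimal sketch, not the authors' implementation; all names are invented), the simulation rendering is composited over live video only inside a lens region, with both frames registered to the same tracked camera pose:

```python
import numpy as np

def circular_lens_mask(h, w, center, radius):
    """Boolean mask that is True inside the circular lens region."""
    ys, xs = np.mgrid[:h, :w]
    return (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2

def composite_magic_lens(video_frame, sim_frame, lens_mask):
    """Show the abstract simulation only inside the lens, leaving the
    real machine visible everywhere else; both inputs are (H, W, 3)
    images already rendered from the same tracked viewpoint."""
    return np.where(lens_mask[..., None], sim_frame, video_frame)
```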
Integrating Gyroscopes into Ubiquitous Tracking Environments
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480802
D. Pustka, Manuel J. Huber, G. Klinker
{"title":"Integrating Gyroscopes into Ubiquitous Tracking Environments","authors":"D. Pustka, Manuel J. Huber, G. Klinker","doi":"10.1109/VR.2008.4480802","DOIUrl":"https://doi.org/10.1109/VR.2008.4480802","url":null,"abstract":"It is widely recognized that inertial sensors, in particular gyroscopes, can improve the latency and accuracy of orientation tracking by fusing the inertial measurements with data from other sensors. In our previous work, we introduced the concepts of spatial relationship graphs and spatial relationship patterns to formally model multi-sensor tracking setups and derive valid applications of well-known algorithms in order to infer new spatial relationships for tracking and calibration. In this work, we extend our approach by providing additional spatial relationship patterns that transform incremental rotations and add gyroscope alignment and fusion. The usefulness of the resulting tracking configurations is evaluated in two different scenarios with both inside-out and outside-in tracking.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130472817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
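The paper's contribution is the formal spatial relationship patterns, which don't reduce to a few lines; what can be sketched is the operation those patterns wrap: integrating body-rate gyroscope readings into an orientation quaternion and nudging the result toward an absolute tracker to cancel drift. A minimal complementary-filter sketch under those assumptions (not the paper's algorithm):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(q, omega, dt):
    """Advance orientation q by the body-frame angular rate omega over
    dt using q_dot = 0.5 * q * (0, omega)."""
    q = q + 0.5 * dt * quat_mul(q, np.concatenate(([0.0], omega)))
    return q / np.linalg.norm(q)

def fuse(q_gyro, q_abs, alpha=0.02):
    """Complementary fusion: trust the gyro short-term and the absolute
    tracker long-term, by nlerp-ing slightly toward the absolute pose."""
    if np.dot(q_gyro, q_abs) < 0.0:   # take the shorter arc
        q_abs = -q_abs
    q = (1.0 - alpha) * q_gyro + alpha * q_abs
    return q / np.linalg.norm(q)
```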
Using an Eye-Tracking System to Improve Camera Motions and Depth-of-Field Blur Effects in Virtual Environments
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480749
Sébastien Hillaire, A. Lécuyer, R. Cozot, Géry Casiez
{"title":"Using an Eye-Tracking System to Improve Camera Motions and Depth-of-Field Blur Effects in Virtual Environments","authors":"Sébastien Hillaire, A. Lécuyer, R. Cozot, Géry Casiez","doi":"10.1109/VR.2008.4480749","DOIUrl":"https://doi.org/10.1109/VR.2008.4480749","url":null,"abstract":"This paper describes the use of user's focus point to improve some visual effects in virtual environments (VE). First, we describe how to retrieve user's focus point in the 3D VE using an eye-tracking system. Then, we propose the adaptation of two rendering techniques which aim at improving users' sensations during first-person navigation in VE using his/her focus point: (1) a camera motion which simulates eyes movement when walking, i.e., corresponding to vestibulo-ocular and vestibulocollic reflexes when the eyes compensate body and head movements in order to maintain gaze on a specific target, and (2) a depth-of-field (DoF) blur effect which simulates the fact that humans perceive sharp objects only within some range of distances around the focal distance. Second, we describe the results of an experiment conducted to study users' subjective preferences concerning these visual effects during first-person navigation in VE. It showed that participants globally preferred the use of these effects when they are dynamically adapted to the focus point in the VE. Taken together, our results suggest that the use of visual effects exploiting users' focus point could be used in several VR applications involving first- person navigation such as the visit of architectural site, training simulations, video games, etc.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134049314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 112
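A gaze-adapted DoF effect needs a per-pixel blur amount driven by the tracked focus point. A hedged sketch using the standard thin-lens circle-of-confusion model (the paper's actual blur model may differ, and the lens parameters below are made up):

```python
import numpy as np

def coc_radius(depth, focal_dist, focal_len=0.035, aperture=0.010):
    """Thin-lens circle-of-confusion size for a point at `depth` (m)
    when the eye/camera is focused at `focal_dist` (m)."""
    return (aperture * focal_len * np.abs(depth - focal_dist)
            / (depth * (focal_dist - focal_len)))

def gaze_adapted_blur(depth_map, gaze_u, gaze_v):
    """Per-pixel blur radius: focus wherever the eye tracker says the
    user is looking, so that region stays sharp."""
    focal_dist = depth_map[gaze_v, gaze_u]   # scene depth under the gaze
    return coc_radius(depth_map, focal_dist)
```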
User-Centered Multimodal Interaction Graph for Design Reviews
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480810
M. Witzel, G. Conti, R. Amicis
{"title":"User-Centered Multimodal Interaction Graph for Design Reviews","authors":"M. Witzel, G. Conti, R. Amicis","doi":"10.1109/VR.2008.4480810","DOIUrl":"https://doi.org/10.1109/VR.2008.4480810","url":null,"abstract":"This work presents a novel approach to author multimodal interaction dialogue of a VR system according to each users specific preferences. We will show how modalities can be bound together via a bidirectional graph in an authoring tool to allow the specification of application-specific domain commands without hardwiring them to the application. As a result we provide a persistent definition of the used modalities outside the application. This is done through the adoption of a so-called \";interaction graph\"; whose nodes and edges represent the dialogue of the user with the system. The application then identifies interaction patterns by matching the path within the graph that represents the actions the user wants to perform within the application.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115409477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
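To make the idea concrete, here is a toy interaction graph, deliberately simplified (directed rather than bidirectional, and with invented token and command names): nodes are input tokens from any modality, edges define legal continuations, and a fully matched path triggers a domain command:

```python
from collections import defaultdict

class InteractionGraph:
    """Toy dialogue graph: nodes are modality-tagged input tokens,
    edges define legal continuations, paths map to domain commands."""
    def __init__(self):
        self.edges = defaultdict(set)
        self.commands = {}                 # token path -> command name

    def add_path(self, tokens, command):
        for a, b in zip(tokens, tokens[1:]):
            self.edges[a].add(b)
        self.commands[tuple(tokens)] = command

    def match(self, tokens):
        """Return the command for a legal, registered path, else None."""
        for a, b in zip(tokens, tokens[1:]):
            if b not in self.edges[a]:
                return None
        return self.commands.get(tuple(tokens))

g = InteractionGraph()
g.add_path(["speech:move", "gesture:point"], "move_selected_object")
print(g.match(["speech:move", "gesture:point"]))  # -> move_selected_object
```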
3D Virtual Haptic Cone for Intuitive Vehicle Motion Control
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/3DUI.2008.4476594
B. Horan, Z. Najdovski, S. Nahavandi, E. Tunstel
{"title":"3D Virtual Haptic Cone for Intuitive Vehicle Motion Control","authors":"B. Horan, Z. Najdovski, S. Nahavandi, E. Tunstel","doi":"10.1109/3DUI.2008.4476594","DOIUrl":"https://doi.org/10.1109/3DUI.2008.4476594","url":null,"abstract":"Haptic technology provides the ability for a system to recreate the sense of touch to a human operator, and as such offers wide reaching advantages. The ability to interact with the human's tactual modality introduces haptic human-machine interaction to replace or augment existing mediums such as visual and audible information. A distinct advantage of haptic human-machine interaction is the intrinsic bilateral nature, where information can be communicated in both directions simultaneously. This paper investigates the bilateral nature of the haptic interface in controlling the motion of a remote (or virtual) vehicle and presents the ability to provide an additional dimension of haptic information to the user over existing approaches (Park et al., 2006; Lee et al., 2002; and Horan et al., 2007). The 3D virtual haptic cone offers the ability to not only provide the user with relevant haptic augmentation pertaining to the task at hand, as do existing approaches, however, to also simultaneously provide an intuitive indication of the current velocities being commanded.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115607256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
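The abstract does not specify how positions inside the cone map to vehicle commands, so the following is only one plausible reading, offered purely as an illustration: stylus depth into the cone scales the admissible lateral range, the lateral direction sets heading, and the cone surface bounds the reachable commands (where a haptic device would render resistance):

```python
import numpy as np

def cone_velocity_command(p, max_speed=1.0, half_angle=np.radians(30.0)):
    """Map a stylus offset p (meters from the cone apex, +z into the
    cone) to a (heading, speed) command, clamped to the cone boundary."""
    depth = max(float(p[2]), 0.0)
    max_lateral = np.tan(half_angle) * depth   # cone radius at this depth
    lateral = min(float(np.linalg.norm(p[:2])), max_lateral)
    heading = float(np.arctan2(p[1], p[0]))    # commanded travel direction
    speed = max_speed * lateral / max_lateral if max_lateral > 0 else 0.0
    return heading, speed
```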
Rapid Creation of Large-scale Photorealistic Virtual Environments
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480767
Charalambos (Charis) Poullis, Suya You, U. Neumann
{"title":"Rapid Creation of Large-scale Photorealistic Virtual Environments","authors":"Charalambos (Charis) Poullis, Suya You, U. Neumann","doi":"10.1109/VR.2008.4480767","DOIUrl":"https://doi.org/10.1109/VR.2008.4480767","url":null,"abstract":"The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel parameterized geometric primitive is presented for the automatic building detection, identification and reconstruction of building structures. In addition, buildings with complex roofs containing non-linear surfaces are reconstructed interactively using a nonlinear primitive. Secondly, we present a rendering pipeline for the composition of photorealistic textures which unlike existing techniques it can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial and satellite).","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131062893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
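The abstract says occluded texture is recovered by integrating ground, aerial, and satellite imagery. A minimal stand-in for that idea (priority compositing, not the authors' pipeline) fills each texel from the first source that actually observed it:

```python
import numpy as np

def composite_textures(layers, masks):
    """Fill each texel from the first source that observed it, e.g.
    aerial imagery first, then ground photos for occluded facades.
    layers: list of (H, W, 3) images registered into the same texture
    space; masks: list of (H, W) boolean visibility maps."""
    out = np.zeros_like(layers[0])
    filled = np.zeros(masks[0].shape, dtype=bool)
    for image, observed in zip(layers, masks):
        take = observed & ~filled    # texels this source can newly fill
        out[take] = image[take]
        filled |= take
    return out
```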
Envisor: Online Environment Map Construction for Mixed Reality
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480745
S. DiVerdi, Jason Wither, Tobias Höllerer
{"title":"Envisor: Online Environment Map Construction for Mixed Reality","authors":"S. DiVerdi, Jason Wither, Tobias Höllerer","doi":"10.1109/VR.2008.4480745","DOIUrl":"https://doi.org/10.1109/VR.2008.4480745","url":null,"abstract":"One of the main goals of anywhere augmentation is the development of automatic algorithms for scene acquisition in augmented reality systems. In this paper, we present Envisor, a system for online construction of environment maps in new locations. To accomplish this, Envisor uses vision-based frame to frame and landmark orientation tracking for long-term, drift-free registration. For additional robustness, a gyroscope/compass orientation unit can optionally be used for hybrid tracking. The tracked video is then projected into a cubemap frame by frame. Feedback is presented to the user to help avoid gaps in the cubemap, while any remaining gaps are filled by texture diffusion. The resulting environment map can be used for a variety of applications, including shading of virtual geometry and remote presence.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131078165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 58
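The core projection step, splatting video frames into a cubemap using only the tracked orientation, can be sketched as follows. This is an assumption-laden simplification (a single +Z face, nearest-neighbor sampling, hypothetical names), not Envisor's code:

```python
import numpy as np

def face_directions(n):
    """Unit world-space ray directions for the +Z cube face, n x n texels."""
    s = (np.arange(n) + 0.5) / n * 2.0 - 1.0
    u, v = np.meshgrid(s, -s)                # image y grows downward
    d = np.stack([u, v, np.ones_like(u)], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def splat_frame(frame, K, R, dirs, face, filled):
    """Project one video frame into a cube face given the tracked
    camera-to-world rotation R and pinhole intrinsics K."""
    cam = dirs @ R                           # rotate rays into camera frame
    z = cam[..., 2]
    uv = cam[..., :2] / np.where(z[..., None] > 0, z[..., None], np.inf)
    x = np.round(uv[..., 0] * K[0, 0] + K[0, 2]).astype(int)
    y = np.round(uv[..., 1] * K[1, 1] + K[1, 2]).astype(int)
    h, w = frame.shape[:2]
    ok = (z > 0) & (x >= 0) & (x < w) & (y >= 0) & (y < h)
    face[ok] = frame[y[ok], x[ok]]           # copy covered texels
    filled |= ok                             # gap map for user feedback
```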
Spatial Electronic Mnemonics for Augmentation of Human Memory
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480777
Y. Ikei, Hirofumi Ota
{"title":"Spatial Electronic Mnemonics for Augmentation of Human Memory","authors":"Y. Ikei, Hirofumi Ota","doi":"10.1109/VR.2008.4480777","DOIUrl":"https://doi.org/10.1109/VR.2008.4480777","url":null,"abstract":"In this paper we propose a novel approach to augmenting human memory based on spatial and graphic information using wearable and smartphone devices. Mnemonics is a technique for memorizing a number of unstructured items that has been known for more than two millennia and was used in ancient Greece. Although its utility is remarkable, acquiring the skill to take advantage of mnemonics is generally difficult. In this study we propose a new method of increasing the effectiveness of classic mnemonics by facilitating the process of memorizing and applying mnemonics. The spatial electronic mnemonics (SROM) proposed here is partly based on an ancient technique that utilizes locations and images that reflect the characteristics of human memory. We first present the design of the SROM as a working hypothesis that augments traditional mnemonics using a portable computer. Then an augmented virtual memory peg (vMPeg) that incorporates a graphic numeral and a photograph of a location is introduced as a first implementation for generating a vMPeg. In an experiment, subjects exhibited remarkable retention of the vMPegs over a long interval. The second phase of placing items to remember on a generated vMPeg was also examined in a preliminary experiment, which also indicated good subject performance. In addition to evaluating the SROM by anylysing the scores for correct recall, a subjective evaluation was performed to investigate the nature of the SROM.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122120282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
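Purely as an illustration of the data a vMPeg bundles (field names are guesses, not from the paper), a peg pairs a graphic numeral with a location photograph and holds the items attached to it:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMemoryPeg:
    """Toy model of a vMPeg: a graphic numeral paired with a location
    photograph, onto which items to remember are placed."""
    numeral: int                   # the graphic numeral shown on the peg
    location_photo: str            # path to the photograph of the place
    items: list = field(default_factory=list)

    def place(self, item: str) -> None:
        self.items.append(item)

pegs = [VirtualMemoryPeg(i, f"locations/{i}.jpg") for i in range(1, 11)]
pegs[0].place("buy train tickets")
```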
Hybrid Feature Tracking and User Interaction for Markerless Augmented Reality
2008 IEEE Virtual Reality Conference. Pub Date: 2008-03-08. DOI: 10.1109/VR.2008.4480766
Taehee Lee, Tobias Höllerer
{"title":"Hybrid Feature Tracking and User Interaction for Markerless Augmented Reality","authors":"Taehee Lee, Tobias Höllerer","doi":"10.1109/VR.2008.4480766","DOIUrl":"https://doi.org/10.1109/VR.2008.4480766","url":null,"abstract":"We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking methods. Distinctive image features of the scene are detected and tracked frame- to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a multi-threaded manner for capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce a user interaction for establishing a global coordinate system and for locating virtual objects in the AR environment. A user's bare hand is used for the user interface by estimating a camera pose relative to the user's outstretched hand. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments using hands for interaction.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124010279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 92
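Only the frame-to-frame half of the hybrid tracker lends itself to a short sketch. The version below uses OpenCV's pyramidal Lucas-Kanade optical flow and simply re-detects corners when too few survive, whereas the paper runs invariant-feature detection in a separate thread; the camera index and thresholds are arbitrary:

```python
import cv2

def track(prev_gray, gray, pts):
    """Track existing features frame-to-frame with pyramidal
    Lucas-Kanade optical flow, dropping points that were lost."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    return nxt[status.ravel() == 1].reshape(-1, 1, 2)

cap = cv2.VideoCapture(0)                     # arbitrary camera index
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=300,
                              qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if pts is not None and len(pts) > 0:
        pts = track(prev, gray, pts)
    # The paper re-localizes with distinctive invariant features in a
    # separate thread; here we just re-detect when tracks run low.
    if pts is None or len(pts) < 100:
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=300,
                                      qualityLevel=0.01, minDistance=7)
    prev = gray
```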