2008 IEEE Virtual Reality Conference: Latest Publications

Redgraph: Navigating Semantic Web Networks using Virtual Reality
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480789
H. Halpin, David J. Zielinski, R. Brady, Glenda Kelly
Abstract: We present Redgraph, a generic virtual reality (VR) visualization program for network data, based on the Resource Description Framework (RDF), the primary data standard underlying the Semantic Web. Redgraph bypasses a number of problems in 3D graph visualization by relying on users to interactively "extrude" a 2D network into the third dimension. This pilot study applies Redgraph to data from the U.S. Patent and Trademark Office to explore innovations in the history of computer science. Comparing subjects' response times with 3D pull-out vs. 2D strategies on tasks involving fine-grained connectivity or broad network observation, we found that subjects correctly answered fine-grained connectivity questions faster using 3D strategies, particularly when the data was densely clustered. Subjects' qualitative feedback suggests that the most valuable application of this 3D technique lies in untimed exploration to discover relationships in the underlying data structure.
Citations: 3
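A minimal sketch of the kind of pipeline the Redgraph abstract implies: load RDF triples into a graph, compute a 2D layout, then "extrude" a user-selected subset of nodes into the third dimension. This is not the authors' implementation; the library choices (rdflib, networkx) and the selection criterion are assumptions.
```python
# Hypothetical RDF-to-3D "extrusion" pipeline; not Redgraph's actual code.
import rdflib
import networkx as nx

def load_rdf_as_graph(path):
    """Parse an RDF file and turn its triples into an undirected labeled graph."""
    rdf = rdflib.Graph()
    rdf.parse(path)  # serialization format is guessed from the file extension
    g = nx.Graph()
    for s, p, o in rdf:
        g.add_edge(str(s), str(o), predicate=str(p))
    return g

def extrude(g, selected, height=1.0):
    """Compute a 2D spring layout, then lift the selected nodes along z."""
    pos2d = nx.spring_layout(g, dim=2, seed=42)
    return {
        n: (float(x), float(y), height if n in selected else 0.0)
        for n, (x, y) in pos2d.items()
    }

# Example: lift a chosen "hub" node and its neighbors into the third dimension.
# g = load_rdf_as_graph("patents.ttl")   # hypothetical input file
# hub = next(iter(g.nodes))
# pos3d = extrude(g, selected={hub, *g.neighbors(hub)})
```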
Multi-dimensional Interactive City Exploration through Mixed Reality
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480790
I. Herbst, Anne-Kathrin Braun, Rod McCall, W. Broll
Abstract: In this paper we present a pervasive outdoor mixed reality edutainment game for exploring the history of a city in both its spatial and temporal dimensions, closely coupling the real environment with the virtual content. The game provides a new and unique user experience, linking rich interactive content to times and places. We describe the development of such a game, including a universal mechanism to define and set up multi-modal user interfaces for game challenges.
Citations: 8
Showing Users the Way: Signs in Virtual Worlds
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480763
D. Cliburn, Stacy Rilea
Abstract: In this paper, we report the results of a pilot study designed to evaluate the impact of signs as navigation aids in virtual worlds. Test subjects were divided into three groups (no aid, a dynamic electronic map, and signs) and asked to search a virtual building four times for six differently colored spheres. The spheres were in the same locations each time, and subjects were allowed to locate them in any order. A statistical analysis of the data revealed that on the first and second trials, subjects with no aid took nearly four times as long to find the spheres as those with maps or signs. We then compared only the sign and map conditions. Overall, subjects who navigated the world with the aid of signs were significantly faster than those who were provided with a map. While more research into the use of signs in virtual worlds is necessary, these results indicate that, for at least some environments, subjects are able to locate targets more quickly using signs than maps.
Citations: 16
Distance education system for teaching manual skills in endoscopic paranasal sinus surgery using "hypermirror" telecommunication interface
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480779
Toru Kumagai, Juli Yamashita, Osamu Morikawa, K. Yokoyama, Shin'ichi Fujimaki, Taku Konishi, Hiroshi Ishimasa, H. Murata, K. Tomoda
Abstract: We have developed a distance education system for developing skills in endoscopic paranasal sinus surgery, enabling efficient remote training of novices in manual skills such as standing position and posture, and the insertion angle/depth and holding of surgical instruments. The system uses a precise model of the human paranasal sinuses and the "HyperMirror" (HM) telecommunication interface. HM is a virtual mirror that allows clear visualization of differences in manual operation between the trainee and the remote expert. This paper outlines the proposed system and describes remote training experiments between two locations 200 miles apart. In the experiments, two expert surgeons trained 17 novices for 40 to 60 minutes on probing of the nasofrontal duct and aspiration of the maxillary sinus, and subjectively evaluated their manual skills. The results showed that most of the novices improved their manual skills and were able to complete each procedure.
Citations: 15
Massively Multiplayer Online Worlds as a Platform for Augmented Reality Experiences
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480752
Tobias Lang, B. MacIntyre, Iker Jamardo Zugaza
Abstract: Massively Multiplayer Online Worlds (MMOs) are persistent virtual environments where people play, experiment, and socially interact. In this paper, we demonstrate that MMOs also provide a powerful platform for Augmented Reality (AR) applications, in which we blend locations in physical space with corresponding places in the virtual world. We introduce the notion of AR stages, which are persistent, evolving spaces that encapsulate AR experiences in online three-dimensional virtual worlds. We discuss the concepts and technology necessary to use an MMO for AR, including a novel set of design concepts aimed at keeping such a system easy to learn and use. By leveraging the features of the commercial MMO Second Life, we have created a powerful AR authoring environment accessible to a large, diverse set of users.
Citations: 29
Object-Capability Security in Virtual Environments
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480750
Martin Scheffler, J. P. Springer, B. Fröhlich
Abstract: Access control is an important aspect of shared virtual environments. Resource access may depend not only on prior authorization, but also on the context of usage, such as distance or position in the scene-graph hierarchy. In virtual worlds that allow user-created content, participants must be able to define and exchange access rights to control the usage of their creations. Using object capabilities, fine-grained access control can be exerted at the object level. We describe our experiences in applying the object-capability model for access control to object-manipulation tasks common to collaborative virtual environments. We also report on a prototype implementation of an object-capability-safe virtual environment that allows anonymous, dynamic exchange of access rights between users, scene elements, and autonomous actors.
Citations: 9
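A minimal, hypothetical illustration of the object-capability idea the abstract describes: the right to manipulate a scene object is granted by handing out an unforgeable reference (here, a revocable proxy) rather than by consulting an access-control list. This sketches the general model, not the authors' prototype.
```python
# Illustrative object-capability style access control; not the paper's implementation.
class SceneObject:
    """A scene-graph node that can only be moved by whoever holds a capability to it."""
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0, 0.0)

    def move(self, position):
        self.position = position

class RevocableMoveCap:
    """A caretaker proxy: possession of this object *is* the right to move the target."""
    def __init__(self, target):
        self._target = target

    def move(self, position):
        if self._target is None:
            raise PermissionError("capability has been revoked")
        self._target.move(position)

    def revoke(self):
        self._target = None

# The owner creates the object, mints a capability, and hands it to a collaborator.
chair = SceneObject("chair")
cap = RevocableMoveCap(chair)
cap.move((1.0, 0.0, 2.0))    # the collaborator can manipulate the object...
cap.revoke()
# cap.move((0.0, 0.0, 0.0))  # ...until the owner revokes: raises PermissionError
```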
Camera Parameter Estimation Method Using Infrared Markers for Live TV Production
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480798
H. Mitsumine, Y. Yamanouchi, T. Fukaya, Hidehiko Okubo, S. Inoue
Abstract: We have developed a robust method for estimating camera parameters for live TV production, based on infrared markers whose feature points are simple to extract and on two-dimensional color-histogram matching that accounts for the effects of specular reflection due to lighting. We first explain the principle of the proposed technique. We then present the results of a basic experiment evaluating the accuracy of marker identification and the results of an image-compositing experiment using the estimated camera parameters. We show, based on those results, that the proposed technique is effective.
Citations: 1
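The paper's contribution is the marker extraction and histogram-based identification; the final step, recovering camera pose from identified 2D-3D marker correspondences, can be sketched with a standard PnP solver. The sketch below uses OpenCV's solvePnP as a stand-in and invented marker coordinates; it is not the paper's pipeline.
```python
# Hypothetical pose-recovery step once markers have been identified; not the paper's method.
import numpy as np
import cv2

# Known 3D marker positions in the studio (world coordinates, assumed values).
object_points = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0], [0.5, 0.5, 0.5], [0.2, 0.8, 0.3],
], dtype=np.float32)

# Their detected 2D image positions (pixels); in the real system these come from
# infrared-marker extraction and histogram-based identification.
image_points = np.array([
    [320.0, 240.0], [420.0, 238.0], [425.0, 330.0],
    [318.0, 335.0], [372.0, 260.0], [350.0, 300.0],
], dtype=np.float32)

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted camera for this sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)        # 3x3 rotation from world to camera coordinates
    camera_position = -R.T @ tvec     # camera centre in world coordinates
```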
Creating a Speech Enabled Avatar from a Single Photograph
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480758
D. Bitouk, S. Nayar
Abstract: This paper presents a complete framework for creating a speech-enabled avatar from a single image of a person. Our approach uses a generic facial motion model that represents deformations of a prototype face during speech. We have developed an HMM-based facial animation algorithm that takes into account both lexical stress and coarticulation. This algorithm produces realistic animations of the prototype facial surface from either text or speech. The generic facial motion model can be transferred to a novel face geometry using a set of corresponding points between the prototype face surface and the novel face. Given a face photograph, a small number of manually selected features in the photograph are used to deform the prototype face surface. The deformed surface is then used to animate the face in the photograph. We show several examples of avatars that are driven by text and speech inputs.
Citations: 10
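The abstract describes deforming a prototype face surface from a handful of manually selected landmarks. One common way to realize such a landmark-driven warp is to interpolate the sparse landmark displacements over the whole surface with a thin-plate spline; the sketch below does exactly that with SciPy's RBFInterpolator and is an assumed stand-in, not the authors' deformation model.
```python
# Hypothetical landmark-driven warp of a prototype face mesh; not the paper's model.
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_prototype(prototype_vertices, proto_landmarks, photo_landmarks):
    """Warp prototype vertices so its landmarks match those selected in the photograph.

    prototype_vertices : (V, 3) vertices of the prototype face mesh
    proto_landmarks    : (L, 3) landmark positions on the prototype
    photo_landmarks    : (L, 3) corresponding positions derived from the photo
    """
    displacements = photo_landmarks - proto_landmarks
    # Thin-plate-spline interpolation of the sparse displacements over the surface.
    warp = RBFInterpolator(proto_landmarks, displacements, kernel="thin_plate_spline")
    return prototype_vertices + warp(prototype_vertices)

# Toy usage with random data, just to show the shapes involved:
rng = np.random.default_rng(0)
verts = rng.normal(size=(500, 3))
src = rng.normal(size=(12, 3))
dst = src + 0.05 * rng.normal(size=(12, 3))
new_verts = deform_prototype(verts, src, dst)
```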
Advanced Multi-Frame Rate Rendering Techniques
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480770
J. P. Springer, C. Lux, D. Reiners, B. Fröhlich
Abstract: Multi-frame rate rendering is a parallel rendering technique that renders the interactive parts of the scene on one graphics card while the rest of the scene is rendered asynchronously on a second graphics card. The resulting color and depth images of both render processes are composited and displayed. This paper presents advanced multi-frame rate rendering techniques that remove limitations of the original approach and reduce artifacts. The interactive manipulation of light sources and their parameters affects the entire scene. Our multi-GPU deferred shading splits the rendering task into a rasterization pass and a lighting pass and distributes the passes to the appropriate graphics card, enabling light manipulation at high frame rates independent of the geometric complexity of the scene. We also developed a parallel volume rendering technique that allows the manipulation of objects inside a translucent volume at high frame rates. Due to the asynchronous nature of multi-frame rate rendering, artifacts may occur during the migration of objects from the slow to the fast graphics card, and vice versa. We show how proper state management can be used to avoid these artifacts almost completely. These techniques were developed in the context of a single-system multi-GPU setup, which considerably simplifies the implementation and increases performance.
Citations: 11
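The core of the compositing step the abstract mentions is a per-pixel depth test between the two asynchronously produced color/depth image pairs. A minimal CPU-side NumPy sketch of that test follows; the paper performs the equivalent operation on the GPU, so this is only an illustration of the idea.
```python
# CPU-side illustration of per-pixel depth compositing of two render passes;
# the actual system does this on the GPU across two graphics cards.
import numpy as np

def depth_composite(color_a, depth_a, color_b, depth_b):
    """Keep, per pixel, the color from whichever pass is closer to the camera.

    color_* : (H, W, 3) arrays; depth_* : (H, W) arrays with smaller = closer.
    """
    closer_a = (depth_a <= depth_b)[..., None]   # (H, W, 1) selection mask
    color = np.where(closer_a, color_a, color_b)
    depth = np.minimum(depth_a, depth_b)
    return color, depth

# Toy usage: a 2x2 image where pass A wins in the first row and pass B in the second.
ca = np.full((2, 2, 3), 0.9); da = np.array([[0.2, 0.2], [0.8, 0.8]])
cb = np.full((2, 2, 3), 0.1); db = np.array([[0.5, 0.5], [0.4, 0.4]])
color, depth = depth_composite(ca, da, cb, db)
```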
Identifying Motion Capture Tracking Markers with Self-Organizing Maps
2008 IEEE Virtual Reality Conference Pub Date: 2008-03-08 DOI: 10.1109/VR.2008.4480809
Matthias Weber, H. B. Amor, T. Alexander
Abstract: Motion capture (MoCap) describes methods and technologies for detecting and measuring human motion in all its intricacies. Most systems use markers to track points on a body. Especially with natural human motion data captured by passive systems (so as not to hinder the participant), deficiencies such as low accuracy of tracked points or even occluded markers can occur. Additionally, such MoCap data is often unlabeled; the system does not provide information about which body landmarks the points belong to. Self-organizing neural networks, especially self-organizing maps (SOMs), are capable of dealing with such problems. This work describes a method to model, initialize, and train such SOMs to track and label potentially noisy motion capture data.
Citations: 6
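For orientation, a minimal SOM training and labeling loop is sketched below: each map unit stands for one body landmark, units are pulled toward their best-matching markers frame by frame, and a marker's label is the index of its nearest unit. The paper's body-specific modeling and initialization are its actual contribution and are not reproduced here; the 1D chain topology and all parameters below are assumptions.
```python
# Minimal self-organizing-map sketch for labeling unlabeled marker clouds;
# the paper's body-specific map model and initialization are not reproduced here.
import numpy as np

def train_som(frames, n_units, epochs=10, lr=0.3, sigma=1.0, seed=0):
    """frames: list of (M, 3) marker clouds; returns (n_units, 3) unit positions."""
    rng = np.random.default_rng(seed)
    units = rng.normal(scale=0.1, size=(n_units, 3)) + frames[0].mean(axis=0)
    grid = np.arange(n_units)                  # 1D chain topology (an assumption)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)
        for cloud in frames:
            for marker in cloud:
                bmu = np.argmin(np.linalg.norm(units - marker, axis=1))
                # Neighborhood function defined on the map topology, not in 3D space.
                h = np.exp(-((grid - bmu) ** 2) / (2 * (sigma * decay) ** 2))
                units += (lr * decay) * h[:, None] * (marker - units)
    return units

def label_markers(cloud, units):
    """Assign each marker the index of its nearest SOM unit."""
    d = np.linalg.norm(cloud[:, None, :] - units[None, :, :], axis=2)
    return d.argmin(axis=1)

# Toy usage with synthetic marker clouds:
rng = np.random.default_rng(1)
frames = [rng.normal(size=(20, 3)) for _ in range(5)]
units = train_som(frames, n_units=20)
labels = label_markers(frames[0], units)
```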