2008 IEEE Virtual Reality Conference: Latest Publications

Capturing Images with Sparse Informational Pixels using Projected 3D Tags
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480744
Li Zhang, N. Subramaniam, Robert Lin, R. Raskar, S. Nayar
Abstract: In this paper, we propose a novel imaging system that enables the capture of photos and videos with sparse informational pixels. Our system is based on the projection and detection of 3D optical tags. We use an infrared (IR) projector to project temporally-coded (blinking) dots onto selected points in a scene. These tags are invisible to the human eye, but appear as clearly visible time-varying codes to an IR photosensor. As a proof of concept, we have built a prototype camera system (consisting of co-located visible and IR sensors) to simultaneously capture visible and IR images. When a user takes an image of a tagged scene using such a camera system, all the scene tags that are visible from the system's viewpoint are detected. In addition, tags that lie in the field of view but are occluded, and ones that lie just outside the field of view, are also automatically generated for the image. Associated with each tagged pixel is its 3D location and the identity of the object that the tag falls on. Our system can interface with conventional image recognition methods for efficient scene authoring, enabling objects in an image to be robustly identified using cheap cameras, minimal computations, and no domain knowledge. We demonstrate several applications of our system, including photo-browsing, e-commerce, augmented reality, and object localization.
Citations: 12
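The temporally-coded tags described above lend themselves to a simple per-pixel decoder: binarize each candidate dot's IR intensity over a window of frames and read the resulting bit string as a tag ID. The sketch below is a minimal illustration of that idea under assumed conventions (frame-stack layout, a global threshold, an 8-bit code), not the authors' implementation.

```python
import numpy as np

def decode_temporal_tags(ir_frames, candidates, code_len=8, thresh=None):
    """Decode blinking IR tags from a stack of frames.

    ir_frames:  (T, H, W) array of IR intensities, T >= code_len.
    candidates: list of (row, col) pixel locations of detected dots.
    Returns {(row, col): tag_id}.
    """
    frames = np.asarray(ir_frames, dtype=np.float32)
    if thresh is None:
        # Assume bright "on" frames and dark "off" frames; split at the midpoint.
        thresh = 0.5 * (frames.max() + frames.min())
    tags = {}
    for (r, c) in candidates:
        series = frames[:code_len, r, c]          # intensity over time at this dot
        bits = (series > thresh).astype(int)      # binarize the blink pattern
        tag_id = int("".join(map(str, bits)), 2)  # read the bits as a tag ID
        tags[(r, c)] = tag_id
    return tags

# Example: one dot blinking the pattern 10110010 over 8 frames.
frames = np.zeros((8, 4, 4))
for t, bit in enumerate([1, 0, 1, 1, 0, 0, 1, 0]):
    frames[t, 2, 3] = 255.0 * bit
print(decode_temporal_tags(frames, [(2, 3)]))  # {(2, 3): 178}
```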
The Two-User Seating Buck: Enabling Face-to-Face Discussions of Novel Car Interface Concepts
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480754
Holger Salzmann, B. Fröhlich
Abstract: The automotive industry uses physical seating bucks, which are minimal mockups of a car interior, to assess various aspects of the planned interior early in the development process. In a virtual seating buck, users wear a head-mounted display (HMD) which overlays a virtual car interior on a physical seating buck. We have developed a two-user virtual seating buck system, which allows two users to take the roles of driver and co-driver. Both users wear tracked head-mounted displays and see the virtual car interior from their respective viewpoints, enabling them to properly interact with the car's interface elements. We use this system for the development, testing and evaluation of novel human-machine interface concepts for future car models. We provide each user with an avatar, since the two co-located users need to see each other's actions. Our evaluation of different head and hand models for representing the two users indicates that the user representations and motions should be as realistic as possible, even though the focus is on testing interface elements operated by the users' fingers. The participants in our study also reported that they clearly prefer the two-user seating buck over a single-user system, since it directly supports face-to-face discussion of the features and problems of a newly developed interface.
Citations: 32
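Rendering the shared interior for two tracked users comes down to giving each HMD its own view matrix, derived from that user's head pose, over the same scene. Below is a minimal sketch of that step, assuming simple tracker output (a world-space position and rotation matrix per head); the pose values are invented for illustration.

```python
import numpy as np

def view_matrix_from_pose(position, rotation):
    """Build a 4x4 view matrix from a tracked head pose.

    position: (3,) head position in tracker/world coordinates.
    rotation: (3, 3) head orientation in the world frame.
    The view matrix is the inverse of the head's world transform.
    """
    view = np.eye(4)
    view[:3, :3] = rotation.T             # inverse of a rotation is its transpose
    view[:3, 3] = -rotation.T @ position  # move the world so the head sits at the origin
    return view

# Two co-located users sharing one virtual car interior (hypothetical poses):
poses = {"driver": (np.array([0.35, 1.1, 0.0]), np.eye(3)),
         "co-driver": (np.array([-0.35, 1.1, 0.0]), np.eye(3))}
for name, (pos, rot) in poses.items():
    V = view_matrix_from_pose(pos, rot)
    # Each HMD renders the same scene graph with its own view matrix V.
    print(name, V[:3, 3])
```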
VARU Framework: Enabling Rapid Prototyping of VR, AR and Ubiquitous Applications
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480774
S. Irawati, S. Ahn, Jinwook Kim, H. Ko
Abstract: Recent advanced interface technologies allow the user to interact with different spaces such as virtual reality (VR), augmented reality (AR) and ubiquitous computing (UC) spaces. Previously, research on human-computer interaction (HCI) in VR, AR and UC has largely been carried out in separate communities. Here, we combine these three interaction spaces into a single interaction space, called tangible space. We propose the VARU framework, which is designed for rapid prototyping of tangible space applications and built to provide extensibility, flexibility and scalability. Depending on the available resources, the user can interact with the virtual, physical or mixed environment. Having the VR, AR and UC spaces on a single platform makes it possible to explore different types of collaboration across the spaces. Finally, we present a prototype application built with the VARU framework.
Citations: 33
E-MAT: The Extremities-Multiple Application Trainer for Haptic-based Medical Training
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480796
Todd Lazarus, G. Martin, Razia Nayeem, J. Fowlkes, Dawn Riddle
Abstract: Research in medical simulation has existed for many years. However, the incorporation of haptics into such simulations has been increasing in recent years, as has the use of medical simulation itself. We present an inexpensive, portable system for training hemorrhage control. While our focus has been on combat medics, the system, known as the Extremities-Multiple Application Trainer (E-MAT), can be applied across all medical fields and supports multiple procedures. E-MAT can operate in a stand-alone mode or be integrated with a host PC or PDA application.
Citations: 1
High-Fidelity Avatar Eye-Representation
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480759
W. Steptoe, A. Steed
Abstract: In collaborative virtual environments, the visual representation of avatars has been shown to be an important determinant of participant behaviour and response. We explored the influence of varying conditions of eye-representation in our high-fidelity avatar by measuring how accurately people can identify the avatar's point-of-regard (direction of gaze), together with subjective authenticity assessments of the avatar's behaviour and visual representation. The first of two variables investigated was socket-deformation, which is to say that our avatar's eyelids, eyebrows and surrounding areas morphed realistically depending on eye-rotation. The second was vergence of our avatar's eyes to the exact point-of-regard. Our results suggest that the two variables significantly influence the accuracy of point-of-regard identification. This accuracy is highly dependent on the combination of viewing-angle and the point-of-regard itself. We found that socket-deformation in particular has a highly positive impact on the perceived authenticity of our avatar's overall appearance, and when judging just the eyes. However, despite favourable subjective ratings, overall performance during the point-of-regard identification task was actually worse with the highest quality avatar. This provides more evidence that as we move forward to using higher fidelity avatars, there will be a tradeoff between supporting realism of representation and supporting the actual communicative task.
Citations: 26
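Vergence, the second variable above, is a small geometric computation: each eye is rotated so that its gaze axis passes through the shared point-of-regard. A minimal sketch of that geometry, assuming head-centered coordinates and a hypothetical ~63 mm interpupillary distance (none of these values come from the paper):

```python
import numpy as np

def eye_gaze_directions(left_eye, right_eye, target):
    """Unit gaze vectors that make both eyes converge on a 3D point-of-regard."""
    left_dir = target - left_eye
    right_dir = target - right_eye
    left_dir /= np.linalg.norm(left_dir)
    right_dir /= np.linalg.norm(right_dir)
    return left_dir, right_dir

def vergence_angle(left_dir, right_dir):
    """Angle between the two gaze axes, in degrees (0 for a target at infinity)."""
    cos_a = np.clip(np.dot(left_dir, right_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

# Hypothetical head-centered coordinates, target 0.5 m straight ahead:
left_eye = np.array([-0.0315, 0.0, 0.0])
right_eye = np.array([0.0315, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
l, r = eye_gaze_directions(left_eye, right_eye, target)
print(round(vergence_angle(l, r), 2), "degrees")  # ~7.21 degrees
```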
Coordination Policies for Co-located Collaborative Travel
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480803
Andreas Simon, Christian Stern
Abstract: To make virtual environment experiences more engaging, we propose to use multiple interaction devices and to support collaborative travel for co-located groups of users. Results of a preliminary pilot study of a collaborative travel task for pairs of users in a projection-based display system suggest that the use of multiple controllers is much more effective and satisfying than swapping and sharing a single device. The study also highlights functional differences between coordination policies for collaborative travel.
Citations: 2
Vertex-preserving Cutting of Elastic Objects
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480799
M. Nakao, K. Minato, Naoto Kume, S. Mori, S. Tomita
Abstract: This paper proposes vertex-preserving cutting methods on finite element models for interactive soft tissue simulation. Unlike existing methods, we aim to shape a variety of incisions using only the initial vertices of tetrahedral meshes. Neither tetrahedral decomposition nor vertex creation is used; the number of vertices is preserved. This avoids increasing the computational cost and allows fast updates of the physical state of the finite element models. To preserve the 3D shape and sharp features of the initial meshes through on-the-fly mesh modification, constraints are introduced into the topological update scheme. In our model, the size of the stiffness matrix is constant. Our framework efficiently simulates several varieties of smooth incisions with sufficient quality for surgical simulation, and also achieves interactive performance on complex meshes with thousands of elements.
Citations: 5
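The claim above that the vertex count, and hence the stiffness-matrix size, stays constant means a cut is purely a topological update. The fragment below sketches that property in its simplest form: face adjacencies whose tetrahedra straddle a cut plane are severed while the vertex array is never touched. The data layout and centroid-side test are illustrative assumptions, not the paper's constraint scheme.

```python
import numpy as np

def cut_mesh_topology(vertices, tets, adjacency, plane_point, plane_normal):
    """Return the adjacency pairs that survive a planar cut.

    vertices:  (N, 3) vertex positions -- never modified (vertex-preserving).
    tets:      (M, 4) vertex indices per tetrahedron.
    adjacency: iterable of (i, j) pairs of face-adjacent tetrahedra.
    A pair is severed when the two tet centroids lie on opposite sides of the
    cut plane, so the cut follows existing faces and no vertices are created.
    """
    centroids = vertices[tets].mean(axis=1)                    # (M, 3)
    side = np.sign((centroids - plane_point) @ plane_normal)   # -1 / +1 per tet
    return {(i, j) for (i, j) in adjacency if side[i] == side[j]}

# Two tetrahedra sharing a face, cut by the plane x = 0.3:
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
adjacency = {(0, 1)}
print(cut_mesh_topology(vertices, tets, adjacency,
                        np.array([0.3, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
# set() -- the shared face is severed; the vertex array is unchanged.
```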
IPSViz: An After-Action Review Tool for Human-Virtual Human Experiences
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480756
A. Raij, Benjamin C. Lok
Abstract: This paper proposes after-action review (AAR) with human-virtual human (H-VH) experiences. H-VH experiences are seeing increased use in training for real-world, H-H experiences. To improve training, the users of H-VH experiences need to review, evaluate, and get feedback on them. AAR enables users to review their H-VH interaction, evaluate their actions, and receive feedback on how to improve future real-world, H-H experiences. The Interpersonal Scenario Visualizer (IPSViz), an AAR tool for H-VH experiences, is presented. IPSViz allows medical students to review their interactions with VH patients. To enable review, IPSViz generates spatial, temporal, and social visualizations of H-VH interactions. Visualizations are generated by treating the interaction as a set of signals. Interaction signals are captured, logged, and processed to generate visualizations for review, evaluation and feedback. In a study (N=27), reviewing the visualizations helped students become self-aware of their actions with a virtual human and gain insight into how to improve interactions with real humans.
Citations: 24
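The abstract above treats an interaction as a set of time-stamped signals that are captured, logged, and processed into reviewable summaries. A minimal sketch of that pipeline, with invented signal names and a trivial talk-time summary standing in for IPSViz's visualization step:

```python
from collections import defaultdict

class InteractionLog:
    """Record time-stamped interaction signals, then summarize them for review."""

    def __init__(self):
        self.signals = defaultdict(list)   # signal name -> [(t, value), ...]

    def record(self, name, t, value):
        self.signals[name].append((t, value))

    def total_talk_time(self, speaker):
        """Sum durations of (start, stop) speech intervals for one participant."""
        events = sorted(self.signals[f"speech:{speaker}"])
        talking, start, total = False, 0.0, 0.0
        for t, value in events:
            if value == "start" and not talking:
                talking, start = True, t
            elif value == "stop" and talking:
                talking, total = False, total + (t - start)
        return total

# Hypothetical fragment of a student / virtual-patient interview:
log = InteractionLog()
log.record("speech:student", 0.0, "start")
log.record("speech:student", 4.2, "stop")
log.record("speech:patient", 4.5, "start")
log.record("speech:patient", 9.0, "stop")
log.record("gaze:student", 2.0, "patient_face")
print(log.total_talk_time("student"))  # 4.2
```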
Psychophysical Influence on Tactual Impression by Mixed-Reality Visual Stimulation
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480793
Akiko Iesaki, Akihiro Somada, Asako Kimura, F. Shibata, H. Tamura
Abstract: This paper describes the influence of visual stimulation on the tactual sense in a mixed-reality environment; i.e., how the tactual impression of a real object is affected by seeing a superimposed image of a different type of material. If the behavior and extent of such an influence, a kind of illusion, are understood in detail, objects composed of a limited variety of materials can be made to be perceived differently, which would be useful in the field of digital engineering. We therefore performed a systematic series of experiments.
Citations: 25
Evaluation of Reorientation Techniques for Walking in Large Virtual Environments
2008 IEEE Virtual Reality Conference | Pub Date: 2008-03-08 | DOI: 10.1109/VR.2008.4480761
Tabitha C. Peck, M. Whitton, H. Fuchs
Abstract: Virtual environments (VEs) that use a real-walking locomotion interface have typically been restricted in size to the area of the tracked lab space. Techniques proposed to lift this size constraint, enabling real walking in VEs that are larger than the tracked lab space, all require reorientation techniques (ROTs) in the worst-case situation: when a user is close to walking out of the tracked space. We propose a new ROT using distractors (objects in the VE for the user to focus on while the VE rotates) and compare our method to current ROTs in two user studies. Our findings show that ROTs using distractors were preferred and ranked as more natural by users. Users were also less aware of the rotating VE when ROTs with distractors were used.
Citations: 69
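An ROT of the kind evaluated above reduces to rotating the VE about the user, while a distractor holds their attention, until walking forward in the real room leads back toward the tracked-space center. A minimal sketch of that geometry; the space size, trigger margin, and an instantaneous (rather than gradual, sub-threshold) rotation are assumptions, and the distractor logic itself is not reproduced.

```python
import math

def reorientation_angle(user_pos, user_heading, room_center=(0.0, 0.0)):
    """Yaw (radians) to rotate the VE so the user's real-world heading
    points back toward the center of the tracked space."""
    to_center = math.atan2(room_center[1] - user_pos[1],
                           room_center[0] - user_pos[0])
    # Wrap the difference to (-pi, pi].
    return (to_center - user_heading + math.pi) % (2 * math.pi) - math.pi

def needs_reorientation(user_pos, half_extent=2.5, margin=0.5):
    """Trigger when the user is within `margin` of the tracked-space edge."""
    return max(abs(user_pos[0]), abs(user_pos[1])) > half_extent - margin

# User near the +x wall of a 5 m x 5 m tracked space, facing the wall:
pos, heading = (2.2, 0.3), 0.0
if needs_reorientation(pos):
    # While a distractor holds the user's attention, apply this yaw to the VE
    # (in practice, spread over time so the rotation stays unnoticed).
    print(round(math.degrees(reorientation_angle(pos, heading)), 1))  # -172.2
```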