2010 IEEE Virtual Reality Conference (VR): Latest Publications

Exploiting change blindness to expand walkable space in a virtual environment
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444752
Evan A. Suma, Seth Clark, Samantha L. Finkelstein, Z. Wartell
We present a technique for exploiting change blindness to allow the user to walk through an immersive virtual environment that is much larger than the available physical workspace. This approach relies on subtle manipulations to the geometry of a dynamic environment model to redirect the user's walking path without the changes becoming noticeable. We describe a virtual environment that was implemented both as a proof of concept and as a test case for future evaluation. Anecdotal evidence from our informal tests suggests a compelling illusion, though a formal study against existing methods is required to evaluate the usefulness of this technique.
Citations: 32
Virtual Experience Test: A virtual environment evaluation questionnaire
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444804
Dustin B. Chertoff, B. Goldiez, J. Laviola
We present the development and evaluation of the Virtual Experience Test (VET). The VET is a survey instrument used to measure holistic virtual environment experiences based upon the five dimensions of experiential design: sensory, cognitive, affective, active, and relational. Experiential Design (ED) is a holistic approach to enhance presence in virtual environments that goes beyond existing presence theory (i.e., a focus on the sensory aspects of VE experiences) to include affective and cognitive factors. To evaluate the VET, 62 participants played the commercial video game Mirror's Edge. After gameplay, both the VET and the ITC-Sense of Presence Inventory (ITC-SOPI) were administered. A principal component analysis was performed on the VET, and it was determined that the actual question clustering coincided with the proposed dimensions of experiential design. Furthermore, scores from the VET were shown to have a significant relationship with presence scores on the ITC-SOPI. The results of this research produced a validated measure of holistic experience that could be used to evaluate virtual environments. Furthermore, our experiment indicates that virtual environments utilizing holistic designs can result in significantly higher presence.
Citations: 70
An image-based system for sharing a 3D object by transmitting to remote locations
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444766
T. Kikukawa, Y. Kitamura, Tsubasa Ohno, S. Sakurai, Tokuo Yamaguchi, F. Kishino, Y. Kunita, M. Isogai, H. Kimata, N. Matsuura
We propose a system that allows multiple groups of users at remote locations to naturally share a 3D image of real objects. All users can interactively observe a 3D stereoscopic image without distortion from their own viewpoints. The system consists of a combination of two subsystems: imaging and display. The imaging system captures images of the real object from a certain number of viewpoints, which are used to generate stereoscopic images from arbitrary viewpoints based on an image-based rendering technique implemented on the GPU. The display system interactively generates a 3D virtual image of the real object based on the IllusionHole technique. Users at one place simply put the real object on their imaging system to capture a set of images from sparse viewpoints around it; other groups of users at remote places connected by networks observe its virtual image from arbitrary viewpoints on their display systems.
Citations: 3
Virtually augmenting hundreds of real pictures: An approach based on learning, retrieval, and tracking
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444811
Julien Pilet, H. Saito
Tracking is a major issue for virtual and augmented reality applications. Single-object tracking on monocular video streams is fairly well understood. However, when it comes to multiple objects, existing methods lack scalability and can recognize only a limited number of objects. Thanks to recent progress in feature matching, state-of-the-art image retrieval techniques can deal with millions of images. However, these methods do not focus on real-time video processing and cannot track retrieved objects. In this paper, we present a method that combines the speed and accuracy of tracking with the scalability of image retrieval. At the heart of our approach is a bi-layer clustering process that allows our system to index and retrieve objects based on tracks of features, thereby effectively summarizing the information available in multiple video frames. As a result, our system is able to track multiple objects in real time, recognized with low delay from a database of more than 300 entries.
Citations: 35
Twinkle: Interacting with physical surfaces using handheld projector
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444809
Takumi Yoshida, Y. Hirobe, Hideaki Nii, N. Kawakami, S. Tachi
We propose a novel interface called Twinkle for interacting with an arbitrary physical surface using a handheld projector and a camera. When a user flashes a projection light on an object, the projected images react as if the user touched the object with the light. The handheld device recognizes the features of the physical environment and displays images and sounds that are generated in real time according to the user's motion and collisions of projected images with objects. We realize this system by using several image-processing techniques and a collision detection algorithm. We also use an acceleration sensor to compensate for the image processing. In this paper, we explain the principle of interacting with a physical surface. Then, we describe the implementation of the prototype system and some application examples.
Citations: 26
Is the rubber hand illusion induced by immersive virtual reality?
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444807
Ye Yuan, A. Steed
The rubber hand illusion is a simple illusion in which participants can be induced to report and behave as if a rubber hand were part of their body. The induction is usually done by an experimenter tapping both a rubber hand prop and the participant's real hand: the touch and visual feedback of the taps must be synchronous and aligned to some extent. The illusion is usually tested by several means, including a physical threat to the rubber hand. The response to the threat can be measured by galvanic skin response (GSR): those who experienced the illusion showed a marked rise in GSR. Based on our own and reported experiences with immersive virtual reality (IVR), we ask whether a similar illusion is induced naturally within IVR. Does the participant report and behave as if the virtual arm is part of their body? We show that participants in an HMD-based IVR who see a virtual body can experience similar responses to threats as those in comparable rubber hand illusion experiments. We show that these responses can be negated by replacing the virtual body with an abstract cursor representing the hand, and that the responses are stable under some gradual forced distortion of tracker space such that proprioceptive and visual information are not matched.
Citations: 206
Dynamic control of multiple focal-plane projections for eliminating defocus and occlusion
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444758
Momoyo Nagase, D. Iwai, Kosuke Sato
This paper presents a novel dynamic control of multiple focal-plane projections. Our approach multiplexes the projectors' focal planes so that all the displayed images are focused. The projectors are placed at various locations with various directions in the scene so that there is no occlusion of the projection light, even for dynamically moving objects. Our approach compensates for defocus blur, oblique blur (loss of high-spatial-frequency components according to the incidence angle of the projection light), and cast shadows of projected images. This is achieved by selecting an optimal projector that can display the finest image for each point of the object surface by comparing the projectors' point spread functions (PSFs). The proposed approach requires geometric calibration to be performed only once in advance. Our approach can compute PSFs without any additional calibration even when the surface moves, provided it is tracked by an external sensor. The method is particularly useful in interactive systems in which the object to be projected is tracked.
Citations: 1
Can you help me concentrate room?
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444801
M. N. Adi, D. Roberts
Would a room in which the walls appear to come to life be a pleasant place, and would it help or hinder concentration? On the doorstep of lifelike architecture, we use virtual reality to begin to answer these questions. We are brought to this doorstep by the potential convergence of emerging adaptive, reactive, and organic architecture with the approaches that have made lifelike virtual agents both engaging and useful. The scene is set by introducing the concept of Adaptive Appraisive Architecture, in which a building could appear to exhibit lifelike appearance and behaviour. While impressive, would such a building be pleasant and useful? Before building the physical structure or coding the artificial intelligence, this paper measures the impact that being within a room with moving walls has on experience and performance. To do this, we gave test subjects two jigsaw puzzles and placed them within a life-size simulation in which the walls first move and then remain static. The impact of this difference on experience is measured through a questionnaire and a post-session interview. The impact on performance is measured in terms of the amount of the puzzle solved. The relevance of the results is not constrained to adaptive architecture, as they add to the body of knowledge relating distractions to concentration.
Citations: 5
A real-time multi-cue hand tracking algorithm based on computer vision
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444787
Zhigeng Pan, Yang Li, Mingmin Zhang, Chao-hui Sun, Kangde Guo, Xing Tang, S. Zhou
Although hand tracking algorithms have been widely used in virtual reality and HCI systems, hand tracking is still a challenging problem in vision-based research. Due to the robustness and real-time requirements of VR applications, most hand tracking algorithms require special devices to achieve satisfactory results. In this paper, we propose an easy-to-use and inexpensive approach to track the hands accurately with a single ordinary webcam. The outstretched hand is detected by contour- and curvature-based detection techniques to initialize the tracking region. Robust multi-cue hand tracking is then achieved by velocity-weighted features and a color cue. Experiments show that the proposed multi-cue hand tracking approach achieves continuous real-time results even with cluttered backgrounds. The approach fulfills the speed and accuracy requirements of frontal-view vision-based human-computer interaction.
Citations: 72
Evaluating haptic feedback in virtual environments using ISO 9241-9
2010 IEEE Virtual Reality Conference (VR) Pub Date: 2010-03-20 DOI: 10.1109/VR.2010.5444753
Robert J. Teather, Daniel Natapov, M. Jenkin
The ISO 9241 Part 9 standard [2] pointing task is used to evaluate passive haptic feedback during target selection in a virtual environment (VE). Participants performed a tapping task using a tracked stylus in a CAVE, both with and without passive haptic feedback provided by a plastic panel co-located with the targets. Pointing throughput (but neither speed nor accuracy alone) was significantly higher with haptic feedback than without it, confirming previous results obtained using an alternative experimental paradigm.
Citations: 29