Latest Publications from the 2009 IEEE Virtual Reality Conference

Spatialized Haptic Rendering: Providing Impact Position Information in 6DOF Haptic Simulations Using Vibrations
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4810990
Jean Sreng, A. Lécuyer, C. Andriot, B. Arnaldi
Abstract: In this paper we introduce a "Spatialized Haptic Rendering" technique to enhance 6DOF haptic manipulation of virtual objects with impact position information using vibrations. This rendering technique exploits our perceptual ability to determine the contact position from the vibrations generated by an impact. In particular, the different vibrations generated by a beam are used to convey the impact position information. We present two experiments conducted to tune and evaluate our spatialized haptic rendering technique. The first experiment investigates the vibration parameters (amplitudes/frequencies) needed to enable efficient discrimination of the force patterns used for spatialized haptic rendering. The second experiment is an evaluation of spatialized haptic rendering during 6DOF manipulation. Taken together, the results suggest that spatialized haptic rendering can be used to improve the haptic perception of impact position in complex 6DOF interactions.
Citations: 11
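The abstract does not detail the force patterns themselves, but the underlying physics is that an impact excites a beam's vibration modes in proportions that depend on where it strikes. The following minimal sketch illustrates that idea; the mode frequencies, decay rates, and the `impact_vibration` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def impact_vibration(x, t, length=1.0, freqs=(80.0, 500.0), decays=(8.0, 20.0)):
    """Decaying two-mode vibration force for an impact at position x on a beam.

    Each mode's amplitude is weighted by a sine mode-shape factor evaluated
    at the impact point, so impacts at different positions excite the modes
    in different ratios; that ratio is the positional cue the paper exploits.
    (Mode shapes, frequencies, and decay rates here are assumed values.)
    """
    force = 0.0
    for k, (f, d) in enumerate(zip(freqs, decays), start=1):
        shape = np.sin(k * np.pi * x / length)            # crude mode-shape weight
        force += shape * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return force

# Sample a 50 ms force pattern at 1 kHz for an impact 20% along the beam.
times = np.arange(0.0, 0.05, 0.001)
pattern = np.array([impact_vibration(0.2, ti) for ti in times])
```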
A virtual reality claustrophobia therapy system - implementation and test
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811020
M. Bruce, H. Regenbrecht
Abstract: Virtual reality exposure therapy (VRET) is becoming an increasingly commonplace technique for the treatment of a wide range of psychological disorders, such as phobias. Effective virtual reality systems are suggested to invoke presence, which in turn elicits an emotional response, helping to lead to a successful treatment outcome. However, a number of problems are apparent: (1) the expense of traditional virtual reality systems hampers their widespread adoption; (2) research into several disorders is still limited in depth; and (3) presence and its relation to delivery mechanism and treatment outcome are still not entirely understood. We implemented and experimentally investigated an immersive VRET prototype system for the treatment of claustrophobia, a system that combines affordability, robustness and practicality while providing presence and effectiveness in treatment. The prototype system was heuristically evaluated and a controlled treatment scenario experiment using a non-clinical sample was performed. In the following, we describe the background, system concept and implementation, the tests and future directions.
Citations: 46
Evaluating the Influence of Haptic Force-Feedback on 3D Selection Tasks using Natural Egocentric Gestures
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4810992
V. Pawar, A. Steed
Abstract: Immersive Virtual Environments (IVEs) allow participants to interact with their 3D surroundings using natural hand gestures. Previous work shows that the addition of haptic feedback cues improves performance on certain 3D tasks. However, we believe this is not true for all situations. Depending on the difficulty of the task, we suggest that we should expect differences in the ballistic movement of our hands when presented with different types of haptic force-feedback conditions. We investigated how hard, soft and no haptic force-feedback responses, experienced when in contact with the surface of an object, affected user performance on a task involving selection of multiple targets. To do this, we implemented a natural egocentric selection interaction technique by integrating a two-handed large-scale force-feedback device into a CAVE™-like IVE system. With this, we performed a user study showing that participants perform selection tasks best when interacting with targets that exert soft haptic force-feedback cues. For targets with hard and no force-feedback properties, we highlight certain hand movements that participants make under these conditions, which we hypothesise reduce their performance.
Citations: 15
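For readers unfamiliar with how "hard", "soft", and "no" force-feedback conditions are typically realized, a common approach is a penalty-based spring-damper contact model in which stiffness controls perceived hardness. The sketch below shows that generic model; the paper does not publish its parameters, so the values and the `contact_force` name are hypothetical.

```python
def contact_force(penetration, velocity, stiffness, damping):
    """Penalty-based (spring-damper) normal contact force in newtons.

    penetration: how far the proxy has sunk into the surface (m);
    velocity: penetration rate (m/s, positive inward).
    A 'hard' target uses high stiffness, a 'soft' one low stiffness,
    and zero stiffness reproduces the no-feedback condition.
    """
    if penetration <= 0.0:                  # not in contact
        return 0.0
    force = stiffness * penetration + damping * velocity
    return max(force, 0.0)                  # a surface pushes, never pulls

hard = contact_force(0.002, 0.01, stiffness=2000.0, damping=5.0)   # ~4 N
soft = contact_force(0.002, 0.01, stiffness=300.0, damping=5.0)    # ~0.65 N
```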
Image Blending and View Clustering for Multi-Viewer Immersive Projection Environments
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4810998
J. Marbach
Abstract: Investment into multi-wall Immersive Virtual Environments is often motivated by the potential for small groups of users to work collaboratively, yet most systems only allow for stereographic rendering from a single viewpoint. This paper discusses approaches for supporting copresent head-tracked users in an immersive projection environment, such as the CAVE™, without relying on additional projection and frame-multiplexing technology. The primary technique presented here is called Image Blending and consists of rendering independent views for each head-tracked user to an off-screen buffer and blending the images into a final composite view using view-vector incidence angles as weighting factors. Additionally, users whose view vectors intersect a projection screen at similar locations are grouped into a view cluster. Clustered user views are rendered from the average head position and orientation of all users in that cluster. The clustering approach minimizes users' exposure to undesirable display artifacts such as inverted stereo pairs and nonlinear object projections by distributing projection error over all tracked viewers. These techniques have the added advantage that they can be easily integrated into existing systems with minimally increased hardware and software requirements. We compare Image Blending and View Clustering with previously published techniques and discuss possible implementation optimizations and their tradeoffs.
Citations: 14
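The two techniques in this abstract map naturally onto short routines: per-user renders weighted by view-vector incidence, and a greedy grouping of users whose view vectors hit the screen near each other. Here is a minimal sketch under those assumptions; the function names, cosine weighting, and clustering radius are illustrative, not taken from the paper.

```python
import numpy as np

def blend_views(images, incidence_cosines):
    """Blend per-user off-screen renders into one composite image.

    images: list of HxWx3 float arrays, one render per head-tracked user.
    incidence_cosines: per-user cosine of the angle between the view vector
    and the screen normal; a more head-on view receives more weight.
    """
    w = np.asarray(incidence_cosines, dtype=float)
    w = w / w.sum()                      # normalise so output stays in range
    return sum(wi * img for wi, img in zip(w, images))

def cluster_heads(head_positions, screen_hits, radius=0.5):
    """Greedy view clustering: users whose view vectors hit the screen
    within `radius` metres of an existing cluster join it; each cluster
    is then rendered from its members' average head position."""
    clusters = []
    for pos, hit in zip(head_positions, screen_hits):
        for c in clusters:
            if np.linalg.norm(hit - c["hit"]) < radius:
                c["members"].append(pos)
                break
        else:
            clusters.append({"hit": hit, "members": [pos]})
    return [np.mean(c["members"], axis=0) for c in clusters]
```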
DiVE into Alcohol: A Biochemical Immersive Experience
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811055
Marcel Yang, D. McMullen, R. Schwartz-Bloom, R. Brady
Abstract: We present DiVE into Alcohol, a virtual reality (VR) program that can be used in chemistry education at the high school and college level, both as an immersive experience and as a web-based program. The program is presented in the context of an engaging topic: the oxidation of alcohol based on genetic differences in metabolism within the liver cell. The user follows alcohol molecules through the body to the liver and into the enzyme active site where the alcohol is oxidized. A gaming format allows the user to choose molecules and orient them in 3D space to enable the oxidation reaction. Interactivity also includes choices of different forms of the enzyme based on the genetically coded structure and rates of oxidation that lead to intoxication vs. sickness. DiVE into Alcohol was constructed with a variety of software providing enzyme structure (PDB files, MolProbity, 3D Kinemage), modeling (Autodesk Maya), and VR technology (3DVIA VirTools).
Citations: 5
Scalable Vision-based Gesture Interaction for Cluster-driven High Resolution Display Systems
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811030
Xun Luo, R. Kenyon
Abstract: We present a coordinated ensemble of scalable computing techniques to accelerate a number of key tasks needed for vision-based gesture interaction, using the cluster that drives a large display system. A hybrid strategy that partitions the scanning task of a frame image by both region and scale is proposed. Based on this hybrid strategy, a novel data structure called a scanning tree is designed to organize the computing nodes. The effectiveness of the proposed solution was tested by incorporating it into a gesture interface controlling an ultra-high-resolution tiled display wall.
Citations: 9
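As a rough illustration of the hybrid partitioning strategy described above, the sketch below splits a sliding-window scan into (tile, scale) work items that a scanning tree could distribute over cluster nodes. The grid size, scale list, and `partition_scan` helper are assumptions, not the paper's design.

```python
from itertools import product

def partition_scan(frame_w, frame_h, scales, grid=(2, 2)):
    """Hybrid partition of a sliding-window scan by both region and scale.

    Returns one work item per (tile, scale) pair; a scanning tree would
    assign these leaves to cluster nodes and merge detections upward.
    A real scanner would also overlap tiles by the detector window size.
    """
    cols, rows = grid
    tile_w, tile_h = frame_w // cols, frame_h // rows
    work = []
    for (i, j), scale in product(product(range(cols), range(rows)), scales):
        tile = (i * tile_w, j * tile_h, tile_w, tile_h)   # x, y, width, height
        work.append({"tile": tile, "scale": scale})
    return work

# A 640x480 frame, three detector scales, 2x2 tiling -> 12 work items.
items = partition_scan(640, 480, scales=[1.0, 1.5, 2.0])
```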
Natural Eye Motion Synthesis by Modeling Gaze-Head Coupling
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811014
Xiaohan Ma, Z. Deng
Abstract: Due to the intrinsic subtlety and dynamics of eye movements, automated generation of natural and engaging eye motion has been a challenging task for decades. In this paper we present an effective technique to synthesize natural eye gazes given a head motion sequence as input, by statistically modeling the innate coupling between gazes and head movements. We first simultaneously recorded head motions and eye gazes of human subjects, using a novel hybrid data acquisition solution consisting of an optical motion capture system and off-the-shelf video cameras. Then, we statistically learn gaze-head coupling patterns using a dynamic coupled component analysis model. Finally, given a head motion sequence as input, we can synthesize its corresponding natural eye gazes based on the constructed gaze-head coupling model. Through comparative user studies and evaluations, we found that, compared with state-of-the-art algorithms for eye motion synthesis, our approach is more effective at generating natural gazes correlated with given head motions. We also showed the effectiveness of our approach for gaze simulation in two-party conversations.
Citations: 53
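The paper learns a dynamic coupled component analysis model; as a much-simplified stand-in that conveys the same idea of predicting gaze from concurrent head motion, here is a ridge-regression sketch. The feature layout and function names are assumptions, not the authors' method.

```python
import numpy as np

def fit_gaze_model(head_features, gaze_angles, lam=1e-3):
    """Fit a ridge-regression map from per-frame head-motion features
    (e.g. head rotation and angular velocity) to per-frame gaze angles.

    head_features: TxD array; gaze_angles: Tx2 array (yaw, pitch).
    Returns a (D+1)x2 weight matrix including a bias row.
    """
    X = np.hstack([head_features, np.ones((len(head_features), 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ gaze_angles)

def synthesize_gaze(weights, head_features):
    """Predict gaze angles for a new head motion sequence."""
    X = np.hstack([head_features, np.ones((len(head_features), 1))])
    return X @ weights
```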
Real-time Volumetric Reconstruction and Tracking of Hands and Face as a User Interface for Virtual Environments
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811035
C. John, Ulrich Schwanecke, H. Regenbrecht
Abstract: Enhancing desk-based computer environments with virtual reality technology requires natural interaction support, in particular head and hand tracking. Today's motion capture systems instrument users with intrusive hardware like optical markers or data gloves, which limits the perceived realism of interactions with a virtual environment and constrains the free moving space of operators. Our work therefore focuses on the development of fault-tolerant techniques for fast, non-contact 3D hand motion capture, targeted at application in standard office environments. This paper presents a table-top setup which utilizes vision-based volumetric reconstruction and tracking of skin-colored objects to integrate the user's hands and face into virtual environments. The system is based on off-the-shelf hardware components and satisfies real-time constraints.
Citations: 5
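Volumetric reconstruction of skin-colored objects from multiple cameras is commonly done by space carving against per-view silhouettes. The sketch below shows that generic visual-hull step under assumed inputs (calibrated 3x4 projection matrices and boolean skin masks); it is not the paper's actual pipeline.

```python
import numpy as np

def carve_skin_voxels(voxels, projections, skin_masks):
    """Keep voxels whose image projection lands on skin-classified pixels
    in every camera view (a visual-hull / space-carving sketch).

    voxels: Nx3 world points; projections: list of 3x4 camera matrices;
    skin_masks: list of HxW boolean skin-segmentation images.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(projections, skin_masks):
        p = homog @ P.T                          # project into the image
        u = (p[:, 0] / p[:, 2]).astype(int)      # pixel column
        v = (p[:, 1] / p[:, 2]).astype(int)      # pixel row
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        on_skin = np.zeros(len(voxels), dtype=bool)
        on_skin[inside] = mask[v[inside], u[inside]]
        keep &= on_skin                          # carve away everything else
    return voxels[keep]
```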
Improving Spatial Perception for Augmented Reality X-Ray Vision
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811002
Ben Avery, C. Sandor, B. Thomas
Abstract: Augmented reality x-ray vision allows users to see through walls and view real occluded objects and locations. We present an augmented reality x-ray vision system that employs multiple view modes to support new visualizations that provide depth cues and spatial awareness to users. The edge overlay visualization provides depth cues to make hidden objects appear to be behind walls, rather than floating in front of them. Utilizing this edge overlay, the tunnel cut-out visualization provides details about occluding layers between the user and the remote location. Inherent limitations of these visualizations are addressed by our addition of view modes allowing the user to obtain additional detail by zooming in, or an overview of the environment via an overhead exocentric view.
Citations: 130
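The edge overlay visualization can be approximated by extracting strong edges from the camera image of the occluding surface and compositing them over the render of the hidden scene. A minimal sketch follows; the gradient operator, threshold, and white edge color are assumptions rather than the paper's actual pipeline.

```python
import numpy as np

def edge_overlay(occluder_gray, hidden_rgb, threshold=0.25):
    """Composite strong edges of the occluding surface over the rendered
    hidden scene, so hidden objects read as lying behind the wall.

    occluder_gray: HxW float image of the real (occluding) surface;
    hidden_rgb: HxWx3 float render of the hidden scene.
    """
    gx = np.zeros_like(occluder_gray)
    gy = np.zeros_like(occluder_gray)
    gx[:, 1:-1] = occluder_gray[:, 2:] - occluder_gray[:, :-2]  # horizontal gradient
    gy[1:-1, :] = occluder_gray[2:, :] - occluder_gray[:-2, :]  # vertical gradient
    edges = np.hypot(gx, gy) > threshold
    out = hidden_rgb.copy()
    out[edges] = 1.0                    # draw occluder edges as white pixels
    return out
```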
A Software Architecture for Sharing Distributed Virtual Worlds
2009 IEEE Virtual Reality Conference, Pub Date: 2009-03-14, DOI: 10.1109/VR.2009.4811050
F. Drolet, M. Mokhtari, François Bernier, D. Laurendeau
Abstract: This paper presents a generic software architecture developed to allow users located at different physical locations to share the same virtual environment and to interact with each other and with the environment in a coherent and transparent manner.
Citations: 8