2009 IEEE Virtual Reality Conference: Latest Publications

Demonstration of Improved Olfactory Display using Rapidly-Switching Solenoid Valves
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811065
T. Nakamoto, M. Kinoshita, Keisuke Murakami, Y. Ariyakul
{"title":"Demonstration of Improved Olfactory Display using Rapidly-Switching Solenoid Valves","authors":"T. Nakamoto, M. Kinoshita, Keisuke Murakami, Y. Ariyakul","doi":"10.1109/VR.2009.4811065","DOIUrl":"https://doi.org/10.1109/VR.2009.4811065","url":null,"abstract":"This article shows our research efforts toward the development of an olfactory display system for presenting odors with a vivid sense of reality. A multi-component olfactory display is developed, which enables generation of a variety of odors by blending multiple odor vapors with arbitrary ratios. In this research demo, the two contents using the olfactory display will be demonstrated. The first one is the experiment on odor approximation using odor components extracted from a mass spectrum database. The second one is the reproduction of video with smell. The temporal changes of odor kind and its concentration can be recorded and can be presented together with movie obtained from a web camera. People can enjoy the recorded video with scents.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122324201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
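Blending odor vapors at arbitrary ratios with rapidly-switching valves is, in essence, duty-cycle (PWM-style) control: each valve is held open for a fraction of a short switching period proportional to its component's share of the blend. A minimal sketch of that logic, assuming a hypothetical `set_valve` driver and illustrative timing constants (none of these names or numbers come from the paper):

```python
import time

# Hypothetical driver: a real system would toggle a DAQ or GPIO line here.
def set_valve(valve_id: int, is_open: bool) -> None:
    print(f"valve {valve_id}: {'open' if is_open else 'closed'}")

def blend_odors(ratios: list[float], period_s: float = 0.1, cycles: int = 10) -> None:
    """Approximate a mixing ratio by time division: each valve stays open
    for a fraction of every switching period proportional to its ratio."""
    total = sum(ratios)
    duty = [r / total for r in ratios]            # normalize to duty cycles
    for _ in range(cycles):
        start = time.monotonic()
        for v in range(len(duty)):                # open all valves at period start
            set_valve(v, True)
        for v in sorted(range(len(duty)), key=duty.__getitem__):
            dt = start + duty[v] * period_s - time.monotonic()
            if dt > 0:                            # close each valve once its
                time.sleep(dt)                    # share of the period elapses
            set_valve(v, False)
        dt = start + period_s - time.monotonic()
        if dt > 0:                                # wait out the rest of the period
            time.sleep(dt)

blend_odors([0.5, 0.3, 0.2])  # e.g., a 50:30:20 blend of three odor components
```

If the switching period is short relative to the sluggishness of human olfaction, the time-multiplexed stream is perceived as a steady mixture.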
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4810999
M. Livingston, Zhuming Ai, J. Swan, H. Smallman
{"title":"Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality","authors":"M. Livingston, Zhuming Ai, J. Swan, H. Smallman","doi":"10.1109/VR.2009.4810999","DOIUrl":"https://doi.org/10.1109/VR.2009.4810999","url":null,"abstract":"We tested users' depth perception of virtual objects in our mobile augmented reality (AR) system in both indoor and outdoor environments using a depth matching task. The indoor environment is characterized by strong linear perspective cues; we attempted to re-create these cues in the outdoor environment. In the indoor environment, we found an overall pattern of underestimation of depth that is typical for virtual environments and AR systems. However, in the outdoor environment, we found that subjects overestimated depth. In addition, our synthetic linear perspective cues met with a measure of success, leading users to reduce their estimate of the depth of distant objects. We describe the experimental procedure, analyze the data, present the results of the study, and discuss the implications for mobile, outdoor AR systems.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126769363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
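Depth matching results such as these are typically summarized as the signed error between matched and true distances, with negative values indicating underestimation. A small sketch of that bookkeeping (the trial numbers below are invented for illustration, not the study's data):

```python
# Signed and percentage depth-matching error; negative = underestimation.
trials = [           # (true_depth_m, matched_depth_m), invented sample values
    (5.0, 4.4),
    (10.0, 8.7),
    (15.0, 13.1),
]

for true_d, matched_d in trials:
    signed_err = matched_d - true_d
    percent = 100.0 * signed_err / true_d
    print(f"{true_d:5.1f} m -> matched {matched_d:5.1f} m ({percent:+.1f}%)")
```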
Virtual Humans That Touch Back: Enhancing Nonverbal Communication with Virtual Humans through Bidirectional Touch
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811019
Aaron Kotranza, Benjamin C. Lok, C. Pugh, D. Lind
{"title":"Virtual Humans That Touch Back: Enhancing Nonverbal Communication with Virtual Humans through Bidirectional Touch","authors":"Aaron Kotranza, Benjamin C. Lok, C. Pugh, D. Lind","doi":"10.1109/VR.2009.4811019","DOIUrl":"https://doi.org/10.1109/VR.2009.4811019","url":null,"abstract":"Touch is a powerful component of human communication, yet has been largely absent in communication between humans and virtual humans (VHs). This paper expands on recent work which allowed unidirectional touch from human to VH, by evaluating bidirectional touch as a new channel for nonverbal communication. A VH augmented with a haptic interface is able to touch her interaction partner using a pseudo-haptic touch or an active-haptic touch from a co-located mechanical arm. Within the context of a simulated doctor-patient interaction, two user studies (n = 54) investigate how touch can be used by both human and VH to communicate. Results show that human-to-VH touch is used for the same communication purposes as human-to-human touch, and that VH-to-human touch (pseudo-haptic and active-haptic) allows the VH to communicate with its human interaction partner. The enhanced nonverbal communication provided by bidirectional touch has the potential to solve difficult problems in VH research, such as disambiguating user speech, enforcing social norms, and achieving rapport with VHs.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116128006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
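One way to picture the bidirectional channel: human-to-VH touches feed the virtual human's behavior model like any other nonverbal cue, while VH-to-human touches are rendered either pseudo-haptically or through the mechanical arm. A minimal sketch of that routing; the class and method names are hypothetical, not the authors' API:

```python
from dataclasses import dataclass

@dataclass
class Touch:
    location: str    # body region, e.g. "shoulder"
    force: float     # normalized 0..1

class BidirectionalTouchChannel:
    """Hypothetical event channel: routes human touches to the virtual
    human's behavior model and renders VH touches as pseudo-haptic
    (visual/audio only) or active-haptic (mechanical arm)."""

    def __init__(self, has_mechanical_arm: bool):
        self.has_mechanical_arm = has_mechanical_arm

    def human_touches_vh(self, touch: Touch) -> str:
        # Interpreted alongside speech and gesture by the VH's behavior model.
        return f"VH perceives a {touch.force:.1f}-force touch on its {touch.location}"

    def vh_touches_human(self, touch: Touch) -> str:
        mode = "active-haptic" if self.has_mechanical_arm else "pseudo-haptic"
        return f"VH delivers a {mode} touch to the user's {touch.location}"

channel = BidirectionalTouchChannel(has_mechanical_arm=True)
print(channel.human_touches_vh(Touch("shoulder", 0.4)))
print(channel.vh_touches_human(Touch("forearm", 0.2)))
```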
Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811013
D. Roberts, R. Wolff, John P Rae, A. Steed, R. Aspin, Moira McIntyre, Adriana Pena, Oyewole Oyekoya, W. Steptoe
{"title":"Communicating Eye-gaze Across a Distance: Comparing an Eye-gaze enabled Immersive Collaborative Virtual Environment, Aligned Video Conferencing, and Being Together","authors":"D. Roberts, R. Wolff, John P Rae, A. Steed, R. Aspin, Moira McIntyre, Adriana Pena, Oyewole Oyekoya, W. Steptoe","doi":"10.1109/VR.2009.4811013","DOIUrl":"https://doi.org/10.1109/VR.2009.4811013","url":null,"abstract":"Eye gaze is an important and widely studied non-verbal resource in co-located social interaction. When we attempt to support tele-presence between people, there are two main technologies that can be used today: video-conferencing (VC) and collaborative virtual environments (CVEs). In VC, one can observe eye-gaze behaviour but practically the targets of eye-gaze are only correct if the participants remain relatively still. We attempt to support eye-gaze behaviour in an unconstrained manner by integrating eye-trackers into an Immersive CVE (ICVE) system. This paper aims to show that while both ICVE and VC allow people to discern being looked at and what else is looked at, when someone gazes into their space from another location, ICVE alone can continue to do this as people move. The conditions of aligned VC, ICVE, eye-gaze enabled ICVE and co-location are compared. The impact of factors of alignment, lighting, resolution, and perspective distortion are minimised through a set of pilot experiments, before a formal experiment records results for optimal settings. Results show that both VC and ICVE support eye-gaze in constrained situations, but only ICVE supports movement of the observer. We quantify the mis-judgements that are made and discuss how our findings might inform research into supporting eye-gaze through interpolated free viewpoint video based methods.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125104817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 47
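Keeping gaze targets correct while participants move comes down to casting the tracked gaze ray into the shared scene model each frame. A geometry-only sketch of that test, with targets approximated as spheres (the positions and names are illustrative, not the experiment's apparatus):

```python
import math

def gaze_target(eye, gaze_dir, targets):
    """Return the name of the sphere (center, radius, name) first hit by the
    gaze ray, or None; gaze_dir need not be unit length."""
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = tuple(c / norm for c in gaze_dir)
    best = None
    for center, radius, name in targets:
        oc = tuple(e - c for e, c in zip(eye, center))   # ray origin - center
        b = sum(o * di for o, di in zip(oc, d))
        disc = b * b - (sum(o * o for o in oc) - radius * radius)
        if disc < 0:
            continue                                     # ray misses this sphere
        t = -b - math.sqrt(disc)                         # nearest intersection
        if t > 0 and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None

# Because eye position and gaze direction are re-tracked every frame, the
# test stays valid as the observer moves, which is the property the eye-gaze
# enabled ICVE preserves and aligned video conferencing loses.
targets = [((0.0, 1.5, -2.0), 0.15, "partner's face"),
           ((0.6, 1.0, -2.0), 0.10, "object on table")]
print(gaze_target((0.0, 1.6, 0.0), (0.0, -0.05, -1.0), targets))
```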
iPhone/iPod Touch as Input Devices for Navigation in Immersive Virtual Environments
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811045
Ji-Sun Kim, D. Gračanin, K. Matkovič, Francis K. H. Quek
{"title":"iPhone/iPod Touch as Input Devices for Navigation in Immersive Virtual Environments","authors":"Ji-Sun Kim, D. Gračanin, K. Matkovič, Francis K. H. Quek","doi":"10.1109/VR.2009.4811045","DOIUrl":"https://doi.org/10.1109/VR.2009.4811045","url":null,"abstract":"iPhone and iPod Touch are multi-touch handheld devices that provide new possibilities for interaction techniques. We describe iPhone/iPod Touch implementation of a navigation interaction technique originally developed for a larger multi-touch device (i.e. Lemur). The interaction technique implemented on an iPhone/iPod Touch was used for navigation tasks in a CAVE virtual environment. We performed a pilot study to measure the control accuracy and to observe how human subjects respond to the interaction technique on the iPhone and iPod Touch devices. We used the preliminary results to improve the design of the interaction technique.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125681266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
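One plausible shape for such a technique is a direct mapping from drag deltas on the touch surface to travel commands in the CAVE. The gesture vocabulary below is an assumption for illustration, not the authors' Lemur-derived design:

```python
def touch_to_navigation(fingers: int, dx: float, dy: float,
                        speed: float = 2.0, turn_rate: float = 60.0,
                        dt: float = 1 / 60) -> tuple:
    """Map a multi-touch drag (dx, dy normalized to -1..1) to per-frame
    travel: (forward_m, strafe_m, yaw_deg). Hypothetical gesture mapping."""
    if fingers == 1:
        # One finger steers: vertical drag moves, horizontal drag turns.
        return (-dy * speed * dt, 0.0, dx * turn_rate * dt)
    if fingers == 2:
        # Two fingers strafe in the horizontal plane.
        return (-dy * speed * dt, dx * speed * dt, 0.0)
    return (0.0, 0.0, 0.0)

# Drag up and to the right with one finger: move forward while turning right.
print(touch_to_navigation(1, dx=0.3, dy=-0.5))
```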
Composable Volumetric Lenses for Surface Exploration
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811060
Jan-Phillip Tiesel, C. Borst, Kaushik Das, G. Kinsland, Christopher M. Best, Vijay B. Baiyya
{"title":"Composable Volumetric Lenses for Surface Exploration","authors":"Jan-Phillip Tiesel, C. Borst, Kaushik Das, G. Kinsland, Christopher M. Best, Vijay B. Baiyya","doi":"10.1109/VR.2009.4811060","DOIUrl":"https://doi.org/10.1109/VR.2009.4811060","url":null,"abstract":"We demonstrate composable volumetric lenses as interpretational tools for geological visualization. The lenses provide a constrained focus region that provides alternative views of datasets to the user while maintaining the context of surrounding features. Our rendering method is based on run-time composition of GPU shader programs that implement per-fragment clipping to lens boundaries and surface shader evaluation. It supports composition of lenses and the user can influence the resulting visualization by interactively changing the order in which the individual lens effects are applied. Multiple shader effects have been created and used for interpretation of high-resolution elevation datasets (like LTDAR and SRTM) in our lab.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133827341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
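The order-dependent composition the abstract describes can be pictured as function composition gated by a containment test: each lens clips to its volume and, inside it, rewrites the shading. The real system composes GPU shader programs at run time; the Python sketch below is only a CPU analogue of that composition logic:

```python
class BoxLens:
    """A volumetric lens: applies `effect` to points inside its box volume."""
    def __init__(self, lo, hi, effect):
        self.lo, self.hi, self.effect = lo, hi, effect  # effect: color -> color

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))

def shade(point, base_color, lenses):
    color = base_color
    for lens in lenses:                 # user-chosen order; composition matters
        if lens.contains(point):
            color = lens.effect(color)
    return color

grayscale = lambda c: (sum(c) / 3,) * 3
boost_red = lambda c: (min(1.0, c[0] * 1.5), c[1], c[2])

lenses = [BoxLens((0, 0, 0), (1, 1, 1), grayscale),
          BoxLens((0.5, 0, 0), (1.5, 1, 1), boost_red)]
p = (0.7, 0.5, 0.5)                                   # inside both lens volumes
print(shade(p, (0.2, 0.6, 0.4), lenses))              # grayscale, then red boost
print(shade(p, (0.2, 0.6, 0.4), lenses[::-1]))        # reversed order differs
```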
Creating Virtual 3D See-Through Experiences on Large-size 2D Displays
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811033
Chang Yuan
{"title":"Creating Virtual 3D See-Through Experiences on Large-size 2D Displays","authors":"Chang Yuan","doi":"10.1109/VR.2009.4811033","DOIUrl":"https://doi.org/10.1109/VR.2009.4811033","url":null,"abstract":"This paper describes a novel approach for creating virtual 3D see-through experiences on large-size 2D displays. The approach aims at simulating the real-life experience of observing an outside world through one or multiple windows. Here, the display screen serves as a virtual window such that viewers moving in front of the display can observe different part of the scene which is also moving in the opposite direction. In order to generate this see-through experience, a virtual 3D scene is first created and placed behind the display. Then, the viewers' 3D positions and motion are tracked by a 3D viewer tracking method based on multiple infra-red cameras. The virtual scene is rendered by 3D graphics engines in real-time speed, based on the tracked viewer positions. A prototype system has been implemented on a large-size tiled display and was able to bring viewers a realistic and natural 3D see-through experience.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"2011 30","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114087415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
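The virtual-window illusion is commonly achieved with an off-axis perspective frustum recomputed each frame from the tracked eye position; the abstract does not give its exact formulation, so the sketch below assumes a screen centered at the origin in the z = 0 plane:

```python
def offaxis_frustum(eye, screen_w, screen_h, near=0.1):
    """eye = (x, y, z) in meters relative to the screen center, z > 0 in
    front of the screen. Returns (left, right, bottom, top) clipping extents
    at the near plane, suitable for a glFrustum-style projection."""
    ex, ey, ez = eye
    scale = near / ez                          # similar triangles
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top

# Viewer 1.5 m in front of a 2.0 m x 1.2 m display, 0.4 m left of center:
print(offaxis_frustum((-0.4, 0.0, 1.5), 2.0, 1.2))
```

As the tracked viewer walks sideways, the frustum skews so the scene behind the display shifts against the motion, exactly the windowed parallax the paper describes.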
A Game Theoretic Approach for Modeling User-System Interaction in Networked Virtual Environments
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811053
Shaimaa Y. Lazem, D. Gračanin, Ayman A. Abdel-Hamid
{"title":"A Game Theoretic Approach for Modeling User-System Interaction in Networked Virtual Environments","authors":"Shaimaa Y. Lazem, D. Gračanin, Ayman A. Abdel-Hamid","doi":"10.1109/VR.2009.4811053","DOIUrl":"https://doi.org/10.1109/VR.2009.4811053","url":null,"abstract":"Networked Virtual Environments (NVEs) are distributed 3D simulations shared among geographically dispersed users. Adaptive resource allocation is a key issue in NVEs, since user interactions affect system resources which in turn affect the user's experience. Such interplay between the users and the system can be modeled using Game Theory. Game Theory is an analytical tool that studies decision-making between interacting agents (players), where player decisions impact greatly other players. We propose a basic structure for a Game Theory model that describes the interaction between the users and the system in NVEs based on an exploratory study of mobile virtual environments.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"791 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114095698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
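In game-theoretic terms, the user and the system are players whose joint choices determine both the experience and the resource cost. A toy instance with invented payoffs (the paper proposes the model structure; these numbers are purely illustrative) and a brute-force Nash equilibrium check:

```python
# Rows: user requests low/high fidelity. Columns: system grants a small/large
# resource share. Each cell is (user_payoff, system_payoff); values invented.
payoffs = {
    ("low",  "small"): (1, 3),   # modest experience, cheap for the system
    ("low",  "large"): (2, 1),   # over-provisioned: wasted resources
    ("high", "small"): (0, 2),   # lag ruins the experience
    ("high", "large"): (3, 2),   # best experience at acceptable cost
}
user_moves, system_moves = ["low", "high"], ["small", "large"]

def is_nash(u, s):
    """Neither player can gain by deviating unilaterally."""
    uu, us = payoffs[(u, s)]
    user_ok = all(payoffs[(u2, s)][0] <= uu for u2 in user_moves)
    sys_ok = all(payoffs[(u, s2)][1] <= us for s2 in system_moves)
    return user_ok and sys_ok

print([(u, s) for u in user_moves for s in system_moves if is_nash(u, s)])
```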
Hybrid Rendering in a Multi-framework VR System
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811046
G. Marino, F. Tecchia, D. Vercelli, M. Bergamasco
{"title":"Hybrid Rendering in a Multi-framework VR System","authors":"G. Marino, F. Tecchia, D. Vercelli, M. Bergamasco","doi":"10.1109/VR.2009.4811046","DOIUrl":"https://doi.org/10.1109/VR.2009.4811046","url":null,"abstract":"The work addresses the topic of software integration in the context of complex Virtual Reality applications. We propose a novel method to render in a single graphical context objects handled by separate programs based on different VR frameworks, even when the various applications are running on different machines. Our technique is based on a network rendering architecture, where both origin and destination applications communicate through a set of protocols that deliver compressed graphical instructions and keep the machines synchronized. Existing applications require minimal changes in order to be compatible with our system; in particular we tested the validity of our approach on a number of rendering frameworks (Ogre3D, OpenSceneGraph, Torque, XVR). We believe that our technique can solve many integration burdens typically encountered in practical situations.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129365980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
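The abstract's key mechanism is a stream of serialized, compressed graphical instructions replayed in the destination's context. A round-trip sketch of that encoding; the command tuples and length-prefixed framing are assumptions, as the abstract does not specify the wire format:

```python
import pickle
import zlib

# A frame's worth of draw commands from the origin application (illustrative).
frame_commands = [
    ("bind_mesh", "ogre_head"),
    ("set_transform", [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, -5, 1]),
    ("draw", {}),
]

def encode_frame(frame_id, commands):
    """Serialize and compress one frame; length-prefix it for stream framing."""
    payload = zlib.compress(pickle.dumps((frame_id, commands)))
    return len(payload).to_bytes(4, "big") + payload

def decode_frame(packet):
    """Inverse of encode_frame, as the destination renderer would run it."""
    size = int.from_bytes(packet[:4], "big")
    return pickle.loads(zlib.decompress(packet[4:4 + size]))

packet = encode_frame(42, frame_commands)
frame_id, commands = decode_frame(packet)
assert commands == frame_commands
print(f"frame {frame_id}: {len(packet)} bytes on the wire, {len(commands)} commands")
```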
A Multimodal Interface for Artifact's Exploration
2009 IEEE Virtual Reality Conference Pub Date: 2009-03-14 DOI: 10.1109/VR.2009.4811054
P. Figueroa, J. Borda, Diego Restrepo, P. Boulanger, Eduardo Londoño, F. Prieto
{"title":"A Multimodal Interface for Artifact's Exploration","authors":"P. Figueroa, J. Borda, Diego Restrepo, P. Boulanger, Eduardo Londoño, F. Prieto","doi":"10.1109/VR.2009.4811054","DOIUrl":"https://doi.org/10.1109/VR.2009.4811054","url":null,"abstract":"We present an integrated interface that takes advantage of several VR technologies for the exploration of small artifacts in a Museum. This interface allows visitors to observe pieces in 3D from several viewpoints, touch them, feel their weight, and hear their sound at touch. From one hand, it will allow visitors to observe artifacts closer, in order to know more about selected pieces and to enhance their overall experience in a Museum. From the other hand, it lever-ages existing technologies in order to provide a multimodal interface, which can be easily replaced for others depending on costs or functionality. We describe some of the early results and experiences we provide in this setup.","PeriodicalId":433266,"journal":{"name":"2009 IEEE Virtual Reality Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130891939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
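The claim that each modality can easily be replaced suggests an architecture where every output channel sits behind a small common interface. A minimal sketch of that idea; all class, method, and field names here are hypothetical, not from the paper:

```python
class Modality:
    """Common interface for one output channel of the exploration station."""
    def on_touch(self, artifact: dict, contact_point: tuple) -> None:
        raise NotImplementedError

class SoundModality(Modality):
    def on_touch(self, artifact, contact_point):
        print(f"play '{artifact['sound']}' for contact at {contact_point}")

class HapticsModality(Modality):
    def on_touch(self, artifact, contact_point):
        print(f"render stiffness {artifact['stiffness']}, weight {artifact['weight_kg']} kg")

class ArtifactExplorer:
    def __init__(self, modalities):
        self.modalities = modalities     # swap backends without touching logic

    def touch(self, artifact, contact_point):
        for m in self.modalities:        # fan the event out to every channel
            m.on_touch(artifact, contact_point)

gold_figure = {"sound": "metal_tap.wav", "stiffness": 0.9, "weight_kg": 0.75}
explorer = ArtifactExplorer([SoundModality(), HapticsModality()])
explorer.touch(gold_figure, (0.01, 0.02, 0.0))
```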