IEEE Virtual Reality 2004: Latest Publications

A user-centered approach on combining realism and interactivity in virtual environments
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.7
M. Roussou, G. Drettakis, N. Tsingos, A. R. Martinez, Emmanuel Gallo
Abstract: In this paper we describe a project that adopts a user-centered approach in the design of virtual environments (VEs) with enhanced realism and interactivity, guided by real-world applications in the areas of urban planning/architecture and cultural heritage education. With regard to realism, we introduce an image-based 3D capture process in which realistic models are created from photographs and subsequently displayed in a VR system using a high-quality, view-dependent algorithm. The VE is further enhanced using advanced vegetation and shadow display algorithms as well as 3D sound. A high degree of interactivity is added, allowing users to build and manipulate elements of the VEs according to their needs, as specified through a user task analysis and a scenario-based approach that is currently being evaluated. This work is developed as part of the EU-funded research project CREATE.
Citations: 9
The application of virtual reality to (chemical engineering) education
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.75
John T. Bell, H. Fogler
Abstract: Virtual reality (VR) offers many benefits to technical education, including the delivery of information through multiple active channels, the addressing of different learning styles, and experiential learning. This poster presents work performed by the authors to apply VR to engineering education in three broad project areas: virtual chemical plants, virtual laboratory accidents, and a virtual UIC campus. The first area provides guided exploration of domains otherwise inaccessible, such as the interior of operating reactors and microscopic reaction mechanisms. The second promotes safety by demonstrating the consequences of not following proper lab safety procedures. The third provides valuable guidance for (foreign) visitors. All programs developed are available on the Web for free download by any interested parties.
Citations: 73
Interactive and continuous collision detection for avatars in virtual environments
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.46
Stéphane Redon, Young J. Kim, Ming C. Lin, Dinesh Manocha, Jim Templeman
Abstract: We present a fast algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment. We model the avatar as an articulated body using line-skeletons with constant offsets and the virtual environment as a collection of polygonized objects. Given the position and orientation of the avatar at discrete time steps, we use an arbitrary in-between motion to interpolate the path for each link between discrete instances. We bound the swept-space of each link using a swept volume (SV) and compute a bounding volume hierarchy to cull away links that are not in close proximity to the objects in the virtual environment. We generate the SVs of the remaining links and use them to check for possible interferences and estimate the time of collision between the surface of the SV and the objects in the virtual environment. Furthermore, we use graphics hardware to perform collision queries on the dynamically generated swept surfaces. Our overall algorithm requires no precomputation and is applicable to general articulated bodies. We have implemented the algorithm on a 2.4 GHz Pentium IV PC with an NVIDIA GeForce FX 5800 graphics card and applied it to an avatar with 16 links moving in a virtual environment composed of hundreds of thousands of polygons. Our prototype system is able to detect all contacts between the moving avatar and the environment in 1.0-30 milliseconds.
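The pipeline described above (in-between motion, swept-volume bounding, hierarchy-based culling, time-of-contact estimation) can be pictured with a much-reduced sketch. The code below is not the authors' implementation: it treats a single link as a capsule (a line-skeleton segment with a constant offset), uses plain linear interpolation as the in-between motion, culls with a swept axis-aligned bounding box, and estimates the earliest contact time against sphere obstacles by sampling. Function names such as `first_contact_time` are illustrative.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def swept_aabb(a0, b0, a1, b1, radius):
    """Axis-aligned box enclosing a capsule swept between two poses."""
    pts = np.stack([a0, b0, a1, b1])
    return pts.min(axis=0) - radius, pts.max(axis=0) + radius

def aabb_overlaps_sphere(lo, hi, center, r):
    closest = np.clip(center, lo, hi)
    return np.linalg.norm(center - closest) <= r

def first_contact_time(a0, b0, a1, b1, radius, spheres, samples=64):
    """Estimate the earliest t in [0, 1] at which the interpolated capsule
    touches any obstacle sphere (center, r). Returns None if no contact."""
    lo, hi = swept_aabb(a0, b0, a1, b1, radius)
    # Culling step: keep only obstacles near the swept volume.
    near = [(c, r) for (c, r) in spheres if aabb_overlaps_sphere(lo, hi, c, r + radius)]
    for i in range(samples + 1):
        t = i / samples
        a = (1 - t) * a0 + t * a1   # linear in-between motion (simplification)
        b = (1 - t) * b0 + t * b1
        for c, r in near:
            if point_segment_distance(c, a, b) <= r + radius:
                return t
    return None

# Example: one link moving toward a unit sphere at the origin.
spheres = [(np.zeros(3), 1.0)]
t_hit = first_contact_time(np.array([3.0, 0.0, 0.0]), np.array([3.0, 1.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
                           radius=0.1, spheres=spheres)
print(t_hit)
```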
Citations: 5
The effect of environment characteristics and user interaction on levels of virtual environment sickness
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.76
R. Ruddle
Abstract: Data are reported for symptoms of virtual environment (VE) sickness that arose in 10 behavioral experiments. In total, 134 participants took part in the experiments and were immersed in VEs for approximately 150 hours. Nineteen of the participants reported major symptoms and two were physically sick. The tasks that participants performed ranged from manipulating virtual objects that they "held" in their hands, to traveling distances of 10 km or more while navigating virtual mazes. The data are interpreted within a framework provided by the virtual environment description and classification system. Environmental dimensions and visual complexity had little effect on the severity of participants' symptoms. Long periods of immersion tended to produce major ocular-motor symptoms. Nausea was affected by the type of movement made to control participants' view, and was particularly severe when participants had to spend substantial amounts of time (3%) looking steeply downwards at their virtual feet. Contrary to expectations, large rapid movements had little effect on most participants, and neither did movements that were not under participants' direct control.
Citations: 41
HIVE: a highly scalable framework for DVE
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.41
Zonghui Wang, Xiaohong Jiang, Jiaoying Shi
Abstract: With the increasing requirements for distributed virtual environments (DVEs), supporting larger numbers of participants and providing smoother roaming and interaction, scalability is becoming a key issue. In this paper, we explore the scalability of participants and the scalability of servers, focusing mainly on three aspects: system architecture, communication model, and interest mechanism. We present our middleware platform, HIVE, which provides a variety of services such as data distribution, communication, and event notification. To achieve reusability and interoperability of DVE applications, the interface specification of the high level architecture (HLA) is employed as the reference. HIVE also contains the back-ends upon which the middleware services depend. With HIVE, users can develop scalable DVE applications easily and quickly, concentrating not on the details of distribution but on the application logic. Finally, an experimental demo on HIVE is given.
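The abstract names data distribution, event notification, and an interest mechanism as HIVE's core services but does not show an API. Purely as an illustration of what region-based interest management in a DVE middleware can look like (none of these class or method names come from HIVE), here is a minimal publish/subscribe sketch in which updates are delivered only to clients whose area of interest covers the update's position.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

Position = Tuple[float, float]

@dataclass
class InterestRegion:
    """Circular area of interest around an avatar (illustrative)."""
    center: Position
    radius: float

    def contains(self, pos: Position) -> bool:
        dx = pos[0] - self.center[0]
        dy = pos[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius * self.radius

@dataclass
class EventBroker:
    """Minimal interest-managed publish/subscribe core."""
    subscribers: Dict[str, Tuple[InterestRegion, Callable[[dict], None]]] = field(default_factory=dict)

    def subscribe(self, client_id: str, region: InterestRegion, callback: Callable[[dict], None]) -> None:
        self.subscribers[client_id] = (region, callback)

    def publish(self, event: dict) -> None:
        # Deliver only to clients whose interest region covers the event position.
        for client_id, (region, callback) in self.subscribers.items():
            if client_id != event.get("sender") and region.contains(event["position"]):
                callback(event)

# Usage: two clients with different areas of interest.
broker = EventBroker()
broker.subscribe("alice", InterestRegion((0.0, 0.0), 10.0), lambda e: print("alice sees", e))
broker.subscribe("bob", InterestRegion((100.0, 100.0), 10.0), lambda e: print("bob sees", e))
broker.publish({"sender": "carol", "position": (2.0, 3.0), "state": "moved"})  # only alice is notified
```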
Citations: 4
Virtual leading blocks for the deaf-blind: a real-time way-finder by verbal-nonverbal hybrid interface and high-density RFID tag space
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.83
Tomohiro Amemiya, Jun Yamashita, K. Hirota, M. Hirose
Abstract: In this paper, we discuss application possibilities of augmented reality technologies in the field of mobility support for the deaf-blind. We propose a navigation system called virtual leading blocks for the deaf-blind, which consists of a wearable interface for Finger-Braille, one of the commonly used communication methods among deaf-blind people in Japan, and a ubiquitous environment for barrier-free applications consisting of floor-embedded active radio-frequency identification (RFID) tags. The wearable Finger-Braille interface, using two Linux-based wristwatch computers, has been developed as a hybrid interface of verbal and nonverbal communication in order to inform users of their direction and position through tactile sensation. We propose the metaphor of "watermelon splitting" for navigation with this system and verify the feasibility of the proposed system through experiments.
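To picture the kind of guidance such a system provides, the following sketch maps floor-tag IDs to known coordinates, computes the bearing from the most recently detected tag to the next waypoint, and reduces it to a coarse directional cue that a tactile display such as a Finger-Braille interface could render. The tag map, cue vocabulary, and function names are hypothetical and not taken from the paper.

```python
import math
from typing import Dict, Tuple

# Hypothetical map from RFID tag ID to floor coordinates in meters.
TAG_POSITIONS: Dict[str, Tuple[float, float]] = {
    "tag-001": (0.0, 0.0),
    "tag-002": (1.0, 0.0),
    "tag-003": (1.0, 1.0),
}

def bearing_deg(frm: Tuple[float, float], to: Tuple[float, float]) -> float:
    """Bearing from one point to another, in degrees, with 0 along the +x axis."""
    return math.degrees(math.atan2(to[1] - frm[1], to[0] - frm[0]))

def directional_cue(current_tag: str, target: Tuple[float, float], heading_deg: float) -> str:
    """Reduce the angle between the user's heading and the target bearing
    to a coarse cue suitable for a tactile display (illustrative vocabulary)."""
    here = TAG_POSITIONS[current_tag]
    delta = (bearing_deg(here, target) - heading_deg + 180.0) % 360.0 - 180.0
    if abs(delta) < 20.0:
        return "straight"
    if abs(delta) > 160.0:
        return "turn around"
    return "turn left" if delta > 0 else "turn right"

# The user stands on tag-001 facing +x; the next waypoint is at (1.0, 1.0).
print(directional_cue("tag-001", (1.0, 1.0), heading_deg=0.0))  # -> "turn left"
```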
Citations: 10
Pre-surgical cranial implant design using the PARIS™ prototype
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.1310075
C. Scharver, R. Evenhouse, Andrew E. Johnson, Jason Leigh
Abstract: Repairing severe human skull injuries requires customized cranial implants, and current visualization research aims to develop a new approach to create these implants. Following pre-surgical design techniques pioneered at the University of Illinois at Chicago (UIC) in 1996, researchers have developed an immersive cranial implant application incorporating haptic force feedback and augmented reality. The application runs on the personal augmented reality immersive system (PARIS™), allowing the modeler to see clearly both his hands and the virtual workspace. The strengths of multiple software libraries are combined to simplify development. This research lays the foundation to eventually replace the traditional modeling and evaluation processes.
Citations: 10
Omnistereo for panoramic virtual environment display systems
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.56
Andreas Simon, Randall C. Smith, Richard R. Pawlicki
Abstract: This paper discusses the use of omnidirectional stereo for panoramic virtual environments. It presents two methods for real-time rendering of omnistereo images. Conventional perspective stereo is correct everywhere in the visual field, but only in one view direction. Omnistereo is correct in every view direction, but only in the center of the visual field, degrading in the periphery. Omnistereo images make it possible to use wide field-of-view virtual environment display systems, like the CAVE™, without head tracking, and still show correct stereoscopic depth over the full 360° viewing circle. This allows the use of these systems as true multi-user displays, where viewers can look around and browse a panoramic scene independently. Because there is no need to rerender the image according to view direction, we can also use this technique to present static omnistereo images, generated by offline rendering or real image capture, in panoramic displays. We have implemented omnistereo in a four-sided CAVE™ and in a 240° i-Con™ curved-screen projection system. Informal user evaluation confirms that omnistereo images present a seamless image with correct stereoscopic depth in every view direction without head tracking.
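The geometric idea behind omnistereo rendering is that eye positions are chosen per view direction rather than once per frame: for each azimuth, the two eyes sit on a small viewing circle, offset perpendicular to the viewing direction by half the interpupillary distance. The sketch below computes those per-direction eye pairs for a set of vertical image slices. It is only a geometric illustration under that assumption, not the authors' renderer; the slice decomposition and parameter names are assumptions.

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def omnistereo_eyes(center: Vec3, azimuth_rad: float, ipd: float = 0.065) -> Tuple[Vec3, Vec3]:
    """Left/right eye positions for one view direction on the viewing circle.

    The eyes are offset from the display center perpendicular to the viewing
    direction (in the horizontal plane) by +/- ipd / 2.
    """
    # Viewing direction and its horizontal perpendicular (pointing to the viewer's right).
    dir_x, dir_y = math.cos(azimuth_rad), math.sin(azimuth_rad)
    right_x, right_y = dir_y, -dir_x
    half = ipd / 2.0
    left_eye = (center[0] - half * right_x, center[1] - half * right_y, center[2])
    right_eye = (center[0] + half * right_x, center[1] + half * right_y, center[2])
    return left_eye, right_eye

def slice_cameras(center: Vec3, slices: int = 360) -> List[Tuple[float, Vec3, Vec3]]:
    """One (azimuth, left eye, right eye) triple per vertical image slice;
    a renderer would draw each narrow slice from its own pair of eye points."""
    cams = []
    for i in range(slices):
        azimuth = 2.0 * math.pi * i / slices
        left_eye, right_eye = omnistereo_eyes(center, azimuth)
        cams.append((azimuth, left_eye, right_eye))
    return cams

# 360 one-degree slices around a viewer standing at eye height 1.6 m.
cameras = slice_cameras((0.0, 0.0, 1.6))
print(cameras[90])  # eye pair for the 90-degree viewing direction
```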
Citations: 41
Interactive retrieval of 3D virtual shapes using physical objects
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.47
Hiroyasu Ichida, Yuichi Itoh, Y. Kitamura, F. Kishino
Abstract: We present a novel method for interactive retrieval of virtual 3D shapes using physical objects. Our method is based on simple physical 3D interaction with a set of tangible blocks. As the user connects blocks, the system automatically recognizes the shape of the constructed physical structure and picks similar 3D virtual shapes from a preset model database in real time. Our system fully supports interactive retrieval of 3D virtual models in an extremely simple fashion, which is completely nonverbal and cross-cultural. These advantages make it an ideal interface for inexperienced users, previously barred from many applications that include 3D shape retrieval tasks.
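The retrieval step, matching the assembled block structure against a model database, can be illustrated with a deliberately simple occupancy comparison. The sketch below represents each shape as a set of occupied grid cells and ranks database models by Jaccard similarity to the cells the user's blocks cover; this metric and the example database are illustrative stand-ins, not the recognition method used by the authors' system.

```python
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int, int]

def jaccard(a: Set[Cell], b: Set[Cell]) -> float:
    """Overlap of two occupancy sets: |A ∩ B| / |A ∪ B|."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def rank_models(blocks: Set[Cell], database: Dict[str, Set[Cell]], top_k: int = 3) -> List[Tuple[str, float]]:
    """Return the top_k database models most similar to the assembled blocks."""
    scores = [(name, jaccard(blocks, cells)) for name, cells in database.items()]
    scores.sort(key=lambda item: item[1], reverse=True)
    return scores[:top_k]

# Hypothetical database of pre-voxelized models.
database = {
    "chair": {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 0)},
    "table": {(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)},
    "tower": {(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3)},
}

# Cells occupied by the blocks the user has connected so far.
assembled = {(0, 0, 0), (0, 0, 1), (0, 0, 2)}
print(rank_models(assembled, database))  # "tower" should rank first
```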
Citations: 9
Texture overlay onto deformable surface using HMD
IEEE Virtual Reality 2004 Pub Date : 2004-03-27 DOI: 10.1109/VR.2004.74
M. Emori, H. Saito
Abstract: We propose a system that overlays textures onto the deformable surface of an object in real time using an HMD. We assume that the surface projected onto the HMD image consists of curved surfaces that can be approximated by 2D geometric curved surfaces, so that we can deform textures using the matrix of a 2D geometric transformation and overlay the deformed textures onto the HMD image. Because the system computes the transformation matrix in each frame, the textures are overlaid in real time even if the observer wearing the HMD moves or deforms the surface. In the system, we select a book as the object with a deformable shape and documents as the textures, so the observer can read digitized documents as if reading a real book.
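A per-frame 2D transform that maps a flat texture onto a tracked region of the camera image can be sketched with a single planar homography, as below, using OpenCV's findHomography and warpPerspective. This is a simplified stand-in: the paper approximates a deformable page with 2D geometric curved surfaces rather than one planar transform, and the corner-tracking input, image sizes, and variable names here are assumptions.

```python
import cv2
import numpy as np

def overlay_texture(frame: np.ndarray, texture: np.ndarray, corners: np.ndarray) -> np.ndarray:
    """Warp `texture` onto the quadrilateral `corners` (4x2 pixel coordinates,
    ordered TL, TR, BR, BL as tracked in the camera/HMD frame) and composite."""
    h, w = texture.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    dst = np.float32(corners)

    # Per-frame 2D transform from texture space to image space.
    H, _ = cv2.findHomography(src, dst)
    warped = cv2.warpPerspective(texture, H, (frame.shape[1], frame.shape[0]))

    # Mask out the target quadrilateral in the frame, then add the warped texture.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, dst.astype(np.int32), 255)
    background = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    foreground = cv2.bitwise_and(warped, warped, mask=mask)
    return cv2.add(background, foreground)

# Synthetic example: paste a checkerboard texture onto a skewed quad in a gray frame.
frame = np.full((480, 640, 3), 64, dtype=np.uint8)
texture = cv2.cvtColor((np.indices((200, 300)).sum(axis=0) % 2 * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
quad = np.float32([[200, 100], [430, 130], [410, 330], [190, 300]])
result = overlay_texture(frame, texture, quad)
cv2.imwrite("overlay.png", result)
```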
Citations: 5