{"title":"A wireless, inexpensive optical tracker for the CAVE/sup TM/","authors":"E. Sharlin, P. Figueroa, Mark W. Green, B. Watson","doi":"10.1109/VR.2000.840508","DOIUrl":"https://doi.org/10.1109/VR.2000.840508","url":null,"abstract":"CAVE/sup TM/ displays offer many advantages over other virtual reality (VR) displays, including a large, unencumbered viewing space. Unfortunately, the typical tracking subsystems used with CAVE/sup TM/ displays tether the user and lessen this advantage. We have designed a simple, low-cost foot tracker that is wireless, leaving the user free to move. The tracker can be assembled for less than $200 US, and achieves an accuracy of /spl plusmn/10 cm at a 20-Hz sampling rate. We have tested the prototype with two applications: a visualization supporting close visual inspection, and a walkthrough of the campus. Although the tracking was convincing, it was clear that the tracker's limitations make it less than ideal for applications requiring precise visual inspection. However the freedom of motion allowed by the tracker was a compelling supplement to our campus walkthrough, allowing users to stroll and look around corners.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"115 21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126376776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic deformable models for enhanced haptic rendering in virtual environments","authors":"R. Ramanathan, Dimitris N. Metaxas","doi":"10.1109/VR.2000.840360","DOIUrl":"https://doi.org/10.1109/VR.2000.840360","url":null,"abstract":"Currently there are no deformable model implementations that model a wide range of geometric deformations while providing realistic force feedback for use in virtual environments with haptics. The few models that exist are computationally very expensive, are limited in terms of shape coverage and do not provide proper haptic feedback. We use dynamic deformable models with local and global deformations governed by physical principles in order to provide efficient and true force feedback. We extend the shape class of Deformable Superquadrics (DeSuq) to provide compact geometric representation using few parameters, while at the same time providing realistic haptic viscoelastic feedback. Dynamics associated with rigid and deformable bodies are modeled by the use of the Lagrange equations. Implementation of these is currently under progress using GHOST/sup TM/ libraries on a PHANToM/sup TM/ haptic device.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122887654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending locales: awareness management in MASSIVE-3","authors":"Jim Purbrick, C. Greenhalgh","doi":"10.1109/VR.2000.840515","DOIUrl":"https://doi.org/10.1109/VR.2000.840515","url":null,"abstract":"In the MASSIVE-3 system we have adopted the locale approach to organising large virtual environments, and extended it, integrating the notion of awareness, adding support for alternative representations of locales, integrating functional and organisation data management and introducing a flexible framework for defining dynamic locale selection policies.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115173732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Systematic design of interactive illustration techniques for user guidance in virtual environments","authors":"V. Paelke","doi":"10.1109/VR.2000.840500","DOIUrl":"https://doi.org/10.1109/VR.2000.840500","url":null,"abstract":"A usability-centred design approach is critically important for content development of virtual environments in real-world applications. We provide a framework that allows one to review existing design techniques and tools against the special requirements of virtual environments, so that appropriate methods can be selected. Focusing on the specific usability requirement of interaction guidance in presentation and instructional environments and its implementation through interactive illustration techniques, we demonstrate the use of the framework. Based on the specific requirements, we derive a suitable design process for interactive illustration techniques and identify appropriate techniques from multimedia and GUI design. The use of the design process and techniques is then illustrated with an example.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134601421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"inTouch: interactive multiresolution modeling and 3D painting with a haptic interface","authors":"Arthur D. Gregory, Stephen A. Ehmann, M. Lin","doi":"10.1109/VR.2000.840362","DOIUrl":"https://doi.org/10.1109/VR.2000.840362","url":null,"abstract":"We present an intuitive 3D interface for interactively editing and painting a polygonal mesh using a force feedback device. An artist or a designer can use the system to create and refine a three-dimensional multiresolution polygonal mesh. Its appearance can be further enhanced by directly painting onto its surface. The system allows users to naturally create complex forms and patterns not only aided by visual feedback, but also by their sense of touch.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129465978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Thing Growing: autonomous characters in virtual reality interactive fiction","authors":"J. Anstey, Dave Pape, D. Sandin","doi":"10.1109/VR.2000.840366","DOIUrl":"https://doi.org/10.1109/VR.2000.840366","url":null,"abstract":"This paper describes \"The Thing Growing\", a work of interactive fiction implemented in virtual reality, in which the user is the main protagonist and interacts with computer controlled characters. This work of fiction depends on the user's emotional investment in the story and on her relationship to a central character, the Thing.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130416979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LoD management on animating face models","authors":"H. Seo, N. Magnenat-Thalmann","doi":"10.1109/VR.2000.840494","DOIUrl":"https://doi.org/10.1109/VR.2000.840494","url":null,"abstract":"Presents our work on a level-of-detail (LoD) technique for human-like face models in virtual environments. Conventional LoD techniques have been adapted to allow facial animation on simplified geometric models. This includes the optimization of both geometric and animation parameters. Simplified models are generated in a region-based manner, considering the mobility of each region. The animation process is decomposed into two sub-processes, and each step is optimized. In the MPA (minimum perceptible action) level optimization, a hierarchical structure is devised for the multi-level animation model. The deformation level is simplified by reducing the number of control points. At run-time, the animation level is selected in combination with viewpoint information at the geometric level.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125754891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An architecture for collaboration in virtual environments","authors":"S. Shirmohammadi, N. Georganas","doi":"10.1109/VR.2000.840511","DOIUrl":"https://doi.org/10.1109/VR.2000.840511","url":null,"abstract":"We introduce an architecture for performing closely-coupled collaborative tasks in virtual environments. Our architecture consists of an application-layer model based on higher-level user interactions with shared objects, and a communication protocol for dissemination of collaborative update messages among participants.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"47 26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122821735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Man multi-agent interaction in VR: a case study with RoboCup","authors":"H. Spoelder, L. Renambot, D. Germans, H. Bal, F. Groen","doi":"10.1109/VR.2000.840519","DOIUrl":"https://doi.org/10.1109/VR.2000.840519","url":null,"abstract":"We discuss the use of virtual reality (VR) techniques for interaction between humans and a multi-agent system in the context of RoboCup. The goal of RoboCup is to let teams of cooperating autonomous agents play a soccer match, using either robots or simulated players. We use RoboCup to study distributed collaborative applications, which allow multiple users at different geographic locations to cooperate, by interacting in real time through a shared simulation program. Our objective is to construct a VR environment in which humans at different locations can play along with a running RoboCup simulation in a natural way. The simulation system consists of the Soccer Server and a set of processes modeling the players. The server keeps track of the state of the game: provides the players with information on the game, and enforces the rules. The players request state information and autonomously calculate a behavior, sending the server commands that consist of accelerations, turns and kicks. The server discretizes time into slots, only one command is executed per time slot. We have developed a 3D visualization system that allows a user in a CAVE to interact with the soccer simulation software.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"511 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116659094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method of constructing a telexistence visual system using fixed screens","authors":"Y. Yanagida, T. Maeda, S. Tachi","doi":"10.1109/VR.2000.840489","DOIUrl":"https://doi.org/10.1109/VR.2000.840489","url":null,"abstract":"Projection-based visual display systems are expected to be effective platforms for virtual reality (VR) applications in which the displayed images are generated by computer graphics using 3D models of virtual worlds. However, these kinds of visual displays, as well as other kinds of fixed screen-based displays, such as various head-tracked displays (HTDs) and conventional CRT displays, have not been utilized to achieve precise telexistence in a real environment, which requires appropriate stereoscopic video images corresponding to the operator's head motion. We found that the time-varying off-axis projection required in these systems has prevented fixed screen-based displays from being used for telexistence, as ordinary cameras only have fixed and symmetric fields of view about the optical axis. After evaluating the problem, a method to realize a live video-based telexistence system with a fixed screen is proposed, aiming to provide the operator with a natural 3D sensation of presence. The key component of our method is a feature that keeps the orientation of the cameras fixed, regardless of the operator's head motion. Such a feature was implemented by designing a constant-orientation link mechanism.","PeriodicalId":375299,"journal":{"name":"Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129379001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}