{"title":"Scalable 3D representation for 3D video display in a large-scale space","authors":"I. Kitahara, Y. Ohta","doi":"10.1109/VR.2003.1191120","DOIUrl":"https://doi.org/10.1109/VR.2003.1191120","url":null,"abstract":"The authors introduce their research for realizing a 3D video display system in a very large-scale space such as a soccer stadium, concert hall, etc. They propose a method for describing the shape of a 3D object with a set of planes in order to synthesize a novel view of the object effectively. The most effective layout of the planes can be determined based on the relative locations of an observer's viewing position, multiple cameras, and 3D objects. A method is described for controlling the LOD of the 3D representation by adjusting the orientation, interval, and resolution of planes. The data size of the 3D model and the processing time can be reduced drastically. The effectiveness of the proposed method is demonstrated by experimental results.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"109 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124868009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented virtual environments (AVE): dynamic fusion of imagery and 3D models","authors":"U. Neumann, Suya You, Jinhui Hu, Bolan Jiang, Jong Weon Lee","doi":"10.1109/VR.2003.1191122","DOIUrl":"https://doi.org/10.1109/VR.2003.1191122","url":null,"abstract":"An augmented virtual environment (AVE) fuses dynamic imagery with 3D models. The AVE provides a unique approach to visualize and comprehend multiple streams of temporal data or images. Models are used as a 3D substrate for the visualization of temporal imagery, providing improved comprehension of scene activities. The core elements of AVE systems include model construction, sensor tracking, real-time video/image acquisition, and dynamic texture projection for 3D visualization. This paper focuses on the integration of these components and the results that illustrate the utility and benefits of the resulting augmented virtual environment.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126014782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrated virtual human interface system with portable virtual reality capability","authors":"B. Kiss, B. Takács, G. Szijártó","doi":"10.1109/VR.2003.1191188","DOIUrl":"https://doi.org/10.1109/VR.2003.1191188","url":null,"abstract":"We demonstrate a photo-realistic, interactive virtual human agent application, called the Virtual Human Interface that employs virtual people to provide digital media users with information, learning services and entertainment in a highly personalized, visually rich virtual reality environment. The virtual digital human is capable of seeing, detecting and recognizing one or multiple people in front of the display and internally model, adapt to, and modulate the user’s mood and emotional state via advanced facial information processing techniques. Additional real-time modules include a portable head mounted VR system to enhance the experience and live imagery captured from a video source to support augmented reality applications.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123768654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HOMERE: a multimodal system for visually impaired people to explore virtual environments","authors":"A. Lécuyer, Pascal Mobuchon, C. Mégard, J. Perret, C. Andriot, J. Colinot","doi":"10.1109/VR.2003.1191147","DOIUrl":"https://doi.org/10.1109/VR.2003.1191147","url":null,"abstract":"The paper describes the HOMERE system: a multimodal system dedicated to visually impaired people to explore and navigate inside virtual environments. The system addresses three main applications: preparation for the visit of an existing site, training for the use of a blind cane, and ludic exploration of virtual worlds. The HOMERE system provides the user with different sensations when navigating inside a virtual world: a force feedback corresponding to the manipulation of a virtual blind cane, a thermal feedback corresponding to the simulation of a virtual sun, and an auditory feedback in spatialized conditions corresponding to the ambient atmosphere and specific events in the simulation. A visual feedback of the scene is also provided to enable sighted people to follow the navigation of the main user. HOMERE has been tested by several visually impaired people who were all confident about the potential of this prototype.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114228730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Syzygy: native PC cluster VR","authors":"Benjamin Schaeffer, Camille Goudeseune","doi":"10.1109/VR.2003.1191116","DOIUrl":"https://doi.org/10.1109/VR.2003.1191116","url":null,"abstract":"The Syzygy software library consists of tools for programming VR applications on PC clusters. Since the PC cluster environment presents application development constraints, it is impossible to simultaneously optimize for efficiency, flexibility, and portability between the single-computer and cluster cases. Consequently Syzygy includes two application frameworks: a distributed scene graph framework for rendering a single application's graphics database on multiple rendering clients, and a master/slave framework for applications with multiple synchronized instances. Syzygy includes a simple distributed OS and supports networked input devices, sound renderers, and graphics renderers, all built on a robust networking layer.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132679843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training for physical tasks in virtual environments: Tai Chi","authors":"P. Chua, Rebecca Crivella, Bo Daly, Ning Hu, Russ Schaaf, David Ventura, Todd Camill, J. Hodgins, R. Pausch","doi":"10.1109/VR.2003.1191125","DOIUrl":"https://doi.org/10.1109/VR.2003.1191125","url":null,"abstract":"We present a wireless virtual reality system and a prototype full body Tai Chi training application. Our primary contribution is the creation of a virtual reality system that tracks the full body in a working volume of 4 meters by 5 meters by 2.3 meters high to produce an animated representation of the user with 42 degrees of freedom. This - combined with a lightweight (<3 pounds) belt-worn video receiver and head-mounted display - provides a wide area, untethered virtual environment that allows exploration of new application areas. Our secondary contribution is our attempt to show that user interface techniques made possible by such a system can improve training for a full body motor task. We tested several immersive techniques, such as providing multiple copies of a teacher's body positioned around the student and allowing the student to superimpose his body directly over the virtual teacher None of these techniques proved significantly better than mimicking traditional Tai Chi instruction, where we provided one virtual teacher directly in front of the student. We consider the implications of these findings for future motion training tasks.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132268289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VIS-Tracker: a wearable vision-inertial self-tracker","authors":"E. Foxlin, L. Naimark","doi":"10.1109/VR.2003.1191139","DOIUrl":"https://doi.org/10.1109/VR.2003.1191139","url":null,"abstract":"We present a demonstrated and commercially viable self-tracker, using robust software that fuses data from inertial and vision sensors. Compared to infrastructure-based trackers, self-trackers have the advantage that objects can be tracked over an extremely wide area, without the prohibitive cost of an extensive network of sensors or emitters to track them. So far, most AR research has focused on the long-term goal of a purely vision-based tracker that can operate in arbitrary unprepared environments, even outdoors. We instead chose to start with artificial fiducials, in order to quickly develop the first self-tracker which is small enough to wear on a belt, low cost, easy to install and self-calibrate, and low enough latency to achieve AR registration. We also present a roadmap for how we plan to migrate from artificial fiducials to natural ones. By designing to the requirements of AR, our system can easily handle the less challenging applications of wearable VR systems and robot navigation.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125245009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An experiment comparing double exponential smoothing and Kalman filter-based predictive tracking algorithms","authors":"J. Laviola","doi":"10.1109/VR.2003.1191164","DOIUrl":"https://doi.org/10.1109/VR.2003.1191164","url":null,"abstract":"We present an experiment comparing double exponential smoothing and Kalman filter-based predictive tracking algorithms with derivative free measurement models. Our results show that the double exponential smoothers run approximately 135 times faster with equivalent prediction performance. The paper briefly describes the algorithms used in the experiment and discusses the results.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123158890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable VR application authoring","authors":"P. Hartling","doi":"10.1109/VR.2003.1191177","DOIUrl":"https://doi.org/10.1109/VR.2003.1191177","url":null,"abstract":"This course will provide attendees with the technical information needed to create their own compelling, scalable, interactive VR applications using VR Juggler. The, course begins with the foundations needed for building VR Juggler applications. It follows with a session on VR Juggler scalability from shared memory high-end workstations to clusters of commodity PCs. The following sessions focus on effective use of VR Juggler as a desktop-to-immersive visualization tool, including the portability of interaction methods. The course concludes with advanced VR Juggler embedded features such as virtual characters and collaboration.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121039506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the ShapeTape tracker for wearable, mobile interaction","authors":"Y. Baillot, Joshua J. Eliason, G. Schmidt, J. Swan, Dennis G. Brown, S. Julier, M. Livingston, L. Rosenblum","doi":"10.1109/VR.2003.1191165","DOIUrl":"https://doi.org/10.1109/VR.2003.1191165","url":null,"abstract":"We describe two engineering experiments designed to evaluate the effectiveness of Measurand's ShapeTape for wearable, mobile interaction. Our initial results suggest that the ShapeTape is not appropriate for interactions which require a high degree of accuracy. However, ShapeTape is capable of reproducing the qualitative motion the user is performing and thus could be used to support 3D gesture-based interaction.","PeriodicalId":105245,"journal":{"name":"IEEE Virtual Reality, 2003. Proceedings.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123536152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}