{"title":"Virtual 3D World Construction by Inter-connecting Photograph-based 3D Models","authors":"Takashi Aoki, T. Tanikawa, M. Hirose","doi":"10.1109/VR.2008.4480782","DOIUrl":"https://doi.org/10.1109/VR.2008.4480782","url":null,"abstract":"We present a novel approach for constructing a virtual 3D world from a sparse set of 2D photograph images. In our approach, we do not construct a large 3D world directly from the images, instead we construct several 3D models from the images and inter-connect them. Each 3D model is photograph-based 3D model that constitutes a few faces and a high-resolution photographic texture image. By arranging some photograph-based 3D models in a 3D scene, we construct a virtual 3D world. To inter-connect these 3D models and represent a unified 3D world, we render the 3D models by blending them together according to view-point position and rotation. Using our novel approach, it is possible to semi-automatically construct a virtual 3D world from fewer photograph images.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122324845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bezier Surface Editing Using Marker-based Augmented Reality","authors":"David O'Gwynn, J. Johnstone","doi":"10.1109/VR.2008.4480800","DOIUrl":"https://doi.org/10.1109/VR.2008.4480800","url":null,"abstract":"This paper describes a marker-based Augmented Reality system for editing tensor-product Bezier surfaces. It uses both a small- scale multi-marker mat and a two-marker wand as its interactive elements. The multi-marker mat establishes a coordinate frame for the display of the surface. Because of its size, it allows the user to rotate and translate the mat physically, even while editing it. The wand is used to select individual control points and edit their 3D position with respect to the surface's coordinate frame. Differentiation between selection and modification is accomplished through a bimodal association of two markers at the end of the wand. The wand's mode is switched by rolling it between the fingers.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116242667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending X3D with Perceptual Auditory Properties","authors":"Katharina Garbe, I. Herbst","doi":"10.1109/VR.2008.4480787","DOIUrl":"https://doi.org/10.1109/VR.2008.4480787","url":null,"abstract":"In this paper we present our approach to extend X3D with audio effects like reverberation, echo or distortion. We describe our concept of \"sound textures\" and the nodes and fields we added to the X3D specification to model ambient audio environments. Furthermore we describe our implementation of our audio-visual renderer.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131644248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Evaluation of Immersive Displays for Virtual Human Experiences","authors":"K. Johnsen, Benjamin C. Lok","doi":"10.1109/VR.2008.4480764","DOIUrl":"https://doi.org/10.1109/VR.2008.4480764","url":null,"abstract":"This paper compares a large-screen display to a non-stereo head-mounted display (HMD) for a virtual human (VH) experience. As VH experiences are increasingly being applied to training, it is important to understand the effect of immersive displays on user interaction with VHs. Results are reported from a user study (n=27) of 10 minute human-VH interactions in a VH experience which allows medical students to practice communication skills with VH patients. Results showed that student self-ratings of empathy, a critical doctor-patient communication skill, were significantly higher in the HMD; however, when compared to observations of student behavior, students using the large-screen display were able to more accurately reflect on their use of empathy. More work is necessary to understand why the HMD inhibits students' ability to self-reflect on their use of empathy.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127656004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New Rendering Approach for Composable Volumetric Lenses","authors":"Christopher M. Best, C. Borst","doi":"10.1109/VR.2008.4480772","DOIUrl":"https://doi.org/10.1109/VR.2008.4480772","url":null,"abstract":"Various virtual and augmented reality systems include volumetric lenses, an extension of 2D magic lenses to 3D volumes in which effects are applied to scene elements. We present a new 3D volumetric lens rendering system that differs fundamentally from other approaches and that is the first to address efficient real-time composition of multiple 3D lenses. A lens factory module composes chainable shader programs for rendering composite visual styles and geometry of intersection regions. Geometry is handled by Boolean combinations of region tests in fragment shaders, which allows both convex and non-convex CSG volumes for lens shape. Efficiency is further addressed by a region analyzer module and by broad-phase culling. Finally, we consider the handling of order effects for composed 3D lenses.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132925110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Load Simulation and Metrics Framework for Distributed Virtual Reality","authors":"H. Singh, D. Gračanin, K. Matkovič","doi":"10.1109/VR.2008.4480804","DOIUrl":"https://doi.org/10.1109/VR.2008.4480804","url":null,"abstract":"We describe a simple load-measure-model method for analyzing the scalability of distributed virtual environments (DVEs). We use a load simulator and three metrics to measure a DVE's engine with varying numbers of simulated users. Our load simulator logs in as a remote client and plays according to how users played during the conducted user study. Two quality of virtuality metrics, fidelity and consistency, describe the user's experience in the DVE. One engine performance metric provides the cycle time of the engine's primary loop. Simulation results (up to 420 users) are discussed.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124734670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Uncertainty Boundaries for Complex Objects in Augmented Reality","authors":"Jiajian Chen, B. MacIntyre","doi":"10.1109/VR.2008.4480784","DOIUrl":"https://doi.org/10.1109/VR.2008.4480784","url":null,"abstract":"Registration errors between the physical world and computer- generated objects are a central problem in Augmented Reality (AR) systems. Some existing AR systems have demonstrated how to dynamically estimate registration errors based on estimates of spatial errors in the system. Using these error estimates, these systems also demonstrated a number of ways of ameliorating the effects of registration error. One central part of this previous work was the creation and use of error regions around objects; unfortunately, the analytic methods used only created accurate regions for simple convex objects. In this paper, we present a simple and stable algorithm for generating the uncertainty regions for complex objects, including non-convex objects and objects with interior holes. We demonstrate how our approach can be used to create a set of more accurate error-based highlights in the presence of registration error, and also be used as a general highlighting mechanism.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130236927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Providing a Wide Field of View for Effective Interaction in Desktop Tangible Augmented Reality","authors":"Seokhee Jeon, G. Kim","doi":"10.1109/VR.2008.4480743","DOIUrl":"https://doi.org/10.1109/VR.2008.4480743","url":null,"abstract":"This paper proposes to generate and provide wide field of view (FOV) augmented reality (AR) imagery by mosaicing images from smaller fields of moving views in \"desktop\" tangible AR (DTAR) environments. AR systems usually offer a limited FOV into the interaction space, constrained by the FOV of the camera and/or the display, which causes serious usability problems especially when the interaction space is large and many tangible props/markers are used. This problem is more apparent in DTAR environments in which an upright frontal display is used, instead of a head mounted display. This can be solved partly by placing the camera at a relatively far location or by using multiple cameras and increasing the working FOV. However, as for the former solution, the large distance between the interaction space and the fixed camera decreases the tracking and recognition reliability of the tangible markers, and the latter solution introduces significant additional set-up, cost, and computational load. Thus, we propose to use a mosaiced image to provide wide FOV AR imagery. We experimentally compare our solution, i.e. to offer the entire view of the interaction space at once, to other nominal AR set-ups. The experimental results show that, despite some amounts of visual artifacts due to the imperfect mosaicing, the proposed solution can improve task performance and usability for a typical DTAR system. Our findings should contribute to making AR systems more practical and usable for the mass.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124542990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual Human + Tangible Interface = Mixed Reality Human An Initial Exploration with a Virtual Breast Exam Patient","authors":"Aaron Kotranza, Benjamin C. Lok","doi":"10.1109/VR.2008.4480757","DOIUrl":"https://doi.org/10.1109/VR.2008.4480757","url":null,"abstract":"Virtual human (VH) experiences are receiving increased attention for training real-world interpersonal scenarios. Communication in interpersonal scenarios consists of not only speech and gestures, but also relies heavily on haptic interaction - interpersonal touch. By adding haptic interaction to VH experiences, the bandwidth of human-VH communication can be increased to approach that of human-human communication. To afford haptic interaction, a new species of embodied agent is proposed - mixed reality humans (MRHs). A MRH is a virtual human embodied by a tangible interface that shares the same registered space. The tangible interface affords the haptic interaction that is critical to effective simulation of interpersonal scenarios. We applied MRHs to simulate a virtual patient requiring a breast cancer screening (medical interview and physical exam). The design of the MRH patient is presented. This paper also presents the results of a pilot study in which eight (n = 8) physician-assistant students performed a clinical breast exam on the MRH patient. Results show that when afforded haptic interaction with a MRH patient, users demonstrated interpersonal touch and social engagement similarly to interacting with a human patient.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125638755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Managing Visual Clutter: A Generalized Technique for Label Segregation using Stereoscopic Disparity","authors":"Stephen D. O'Connell, Magnus Axholt, S. Ellis","doi":"10.1109/VR.2008.4480769","DOIUrl":"https://doi.org/10.1109/VR.2008.4480769","url":null,"abstract":"We present a new technique for managing visual clutter caused by overlapping labels in complex information displays. This technique, \"label layering\", utilizes stereoscopic disparity as a means to segregate labels in depth for increased legibility and clarity. By distributing overlapping labels in depth, we have found that selection time during a visual search task in situations with high levels of overlap is reduced by four seconds or 24%. Our data show that the depth order of the labels must be correlated with the distance order of their corresponding objects. Since a random distribution of stereoscopic disparity in contrast impairs performance, the benefit is not solely due to the disparity-based image segregation. An algorithm using our label layering technique accordingly could be an alternative to traditional label placement algorithms that avoid label overlap at the cost of distracting motion, symbology dimming or label size reduction.","PeriodicalId":173744,"journal":{"name":"2008 IEEE Virtual Reality Conference","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125448583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}