{"title":"An approach to Distributed Virtual Environment performance modeling: Addressing system complexity and user behavior","authors":"H. Singh, D. Gračanin","doi":"10.1109/VR.2012.6180887","DOIUrl":"https://doi.org/10.1109/VR.2012.6180887","url":null,"abstract":"Distributed Virtual Environment systems are complex systems that include graphics, physical simulation, and networked state synchronization. The system performance depends on all these components balancing the load and while sharing available computational, graphic, and network resources. We present an approach to analysis of the load balance of load and resource usage.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128151542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating occlusion-free textures for virtual 3D model of urban facades by fusing image and laser street data","authors":"K. Hammoudi, F. Dornaika, B. Soheilian, B. Vallet, N. Paparoditis","doi":"10.1109/VR.2012.6180927","DOIUrl":"https://doi.org/10.1109/VR.2012.6180927","url":null,"abstract":"In this paper we present relevant results of a work in progress1 that deals with the texturing of 3D urban facade models by fusing terrestrial multi-source data acquired by a Mobile Mapping System (MMS). Some of current 3D urban facade models often are textured by using images that contain parts of urban objects that belong to the street. These urban objects represent in this case occlusions since they are located between the acquisition system and the facades. We show the potential use of georeferenced images and 3D point cloud that are acquired at street level by the MMS in generating occlusion-free facade textures. We describe a methodology for reconstructing texture parts of facades that are highly occluded by wide frontal objects.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114217007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Physics-based multi-domain subspace deformation with component mode synthesis","authors":"Y. Yang, X. Guo","doi":"10.1109/VR.2012.6180886","DOIUrl":"https://doi.org/10.1109/VR.2012.6180886","url":null,"abstract":"Fast and accurate simulation of 3D soft objects is important to virtual environment and reality. Simulating 3D deformation of large model in real-time is a challenging problem as it is very computation-demanding involving intensive matrix-based operation of large scale. Reduction techniques, consequently flourish where the dynamic is computed within a subspace of much smaller size with accuracy loss. This type of technique greatly boosts the simulation performance. Currently, most reduction methods use globally-computed bases. As a result, when large local deformation occurs, global bases often fail to provide necessary freedoms at the desired region. Alternatively, we construct subspaces locally based on the linear component mode synthesis (CMS) method. The components are the mutually disjoint sub-meshes (with duplicated boundary DOFs) and the local bases are called component modes which are the displacements of the components under certain mechanical equilibrium.We greatly extend the classic CMS with the following contributions. 1) We propose a new physics-based multi-domain subspace deformable model based on CMS. The subspace is locally constructed with component modes. The computation of modes follow a compact and straightforward formulation and the pre-computation is orders-faster comparing with some global subspace techniques. 2) The classic CMS does not handle large deformations with the linear modes. We extend the idea of modal warping to CMS with co-rotational elasticity to accommodate large rotational deformation. 3) A new type of mode called degenerated constraint mode is employed which constructs the subspaces of small size at components while preserving the boundary compatibility. As a result, the simulation can be performed within a small subspace and the boundary locking artifacts are also avoided. 4) We also propose another new type of mode called user constraint mode, which prevents the reduced system from being over-constrained. 5) Based on the extended CMS, we propose several simulation strategies including the hybrid simulation with the customized local mode supersets and the skeleton-driven deformable model based on the interface hierarchy.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126675849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reducing interference between multiple structured light depth sensors using motion","authors":"Andrew Maimone, H. Fuchs","doi":"10.1109/VR.2012.6180879","DOIUrl":"https://doi.org/10.1109/VR.2012.6180879","url":null,"abstract":"We present a method for reducing interference between multiple structured light-based depth sensors operating in the same spectrum with rigidly attached projectors and cameras. A small amount of motion is applied to a subset of the sensors so that each unit sees its own projected pattern sharply, but sees a blurred version of the patterns of other units. If high spacial frequency patterns are used, each sensor sees its own pattern with higher contrast than the patterns of other units, resulting in simplified pattern disambiguation. An analysis of this method is presented for a group of commodity Microsoft Kinect color-plus-depth sensors with overlapping views. We demonstrate that applying a small vibration with a simple motor to a subset of the Kinect sensors results in reduced interference, as manifested as holes and noise in the depth maps. Using an array of six Kinects, our system reduced interference-related missing data from from 16.6% to 1.4% of the total pixels. Another experiment with three Kinects showed an 82.2% percent reduction in the measurement error introduced by interference. A side-effect is blurring in the color images of the moving units, which is mitigated with post-processing. We believe our technique will allow inexpensive commodity depth sensors to form the basis of dense large-scale capture systems.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115992708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flat-shaped, front-face-drive scent projector","authors":"Y. Yanagida, T. Tanakamaru, Hiroki Nagayanagi, Yuki Nomura, Toshimasa Aritake","doi":"10.1109/VR.2012.6180930","DOIUrl":"https://doi.org/10.1109/VR.2012.6180930","url":null,"abstract":"In this study, a hardware configuration for reducing the size of a scent projector is proposed, and its performance is examined using a prototype system. A scent projector is a system that delivers a small amount of scented air to the user's nose by channeling the scent through a vortex ring. A projector enables scents to be presented locally in time and space without requiring the user to wear any special apparatus. Such localization is important when scents are presented synchronously with other sensory stimuli. However, scent projectors make use of so-called “air cannons” that usually have significant volume, making their size a major drawback. To solve this problem, we examined the principal parameters that affect the performance of a scent projector, and found that the inner volume of the air cannon does not seriously affect the range of a vortex ring. Based on this analysis, we designed a flat-shaped scent projector that has zero volume in the idle state and built a prototype system. Our prototype performed almost as well as existing scent projectors with larger volume.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116187097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effects of navigational control and environmental detail on learning in 3D virtual environments","authors":"E. Ragan, Karl Huber, B. Laha, D. Bowman","doi":"10.1109/VR.2012.6180868","DOIUrl":"https://doi.org/10.1109/VR.2012.6180868","url":null,"abstract":"Studying what design features are necessary and effective for educational virtual environments (VEs), we focused on two design issues: level of environmental detail and method of navigation. In a controlled experiment, participants studied animal facts distributed among different locations in an immersive VE. Participants viewed the information as either an automated tour through the environment or with full navigational control. The experiment also compared two levels of environmental detail: a sparse environment with only the animal fact cards and a detailed version that also included landmark items and ground textures. The experiment tested memory and understanding of the animal information. Though neither environmental detail nor navigation type significantly affected learning outcomes, the results suggest that manual navigation may have negatively affected the learning activity. Also, learning scores were correlated with both spatial ability and video game usage, suggesting that educational VEs may not be an appropriate presentation method for some learners.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129992384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating large data sets in virtual worlds","authors":"Huaiyu Liu, M. Bowman, R. Adams, Dan Lake, Jerry O. Talton, Sean Koehl, Robert Noradki","doi":"10.1109/VR.2012.6180938","DOIUrl":"https://doi.org/10.1109/VR.2012.6180938","url":null,"abstract":"The ever increasing mass of information leads to new challenges on analyzing or navigating the large data sets. Combining visual perception and interaction capabilities with the enormous storage and computational power of today's computer systems, especially with the rise of 3D virtual worlds, has great potential in providing deeper immersion and intuitive interactions with large data sets. In this demo, we exploit the potential of navigating large data-sets in a 3D virtual world, by transforming raw data sets into semantically rich, high level interactions and presenting data through rich, real-time visualization. We also explore the use of various digital devices that most users have available to build “distributed interfaces” and provide capabilities that make interactions within the 3D space, and with the data sets presented in the 3D space, more natural and expressive.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125553126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial augmented reality for environmentally-lit real-world objects","authors":"Alvin J. Law, Daniel G. Aliaga","doi":"10.1109/VR.2012.6180867","DOIUrl":"https://doi.org/10.1109/VR.2012.6180867","url":null,"abstract":"One augmented reality approach is to use digital projectors to alter the appearance of a physical scene, avoiding the need for head-mounted displays or special goggles. Instead, spatial augmented reality (SAR) systems depend on having sufficient light radiance to compensate the surface's colors to those of a target visualization. However, standard SAR systems in dark room settings may suffer from insufficient light radiance causing bright colors to exhibit unexpected color shifts, resulting in a misleading visualization. We introduce a SAR framework which focuses on minimally altering the appearance of arbitrarily shaped and colored objects to exploit the presence of environment/room light as an additional light source to achieve compliancy for bright colors. While previous approaches have compensated for environment light, none have explicitly exploited the environment light to achieve bright, previously incompliant colors. We implement a full working system and compared our results to solutions achievable with standard SAR systems.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126515882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive game-based rehabilitation using the Microsoft Kinect","authors":"B. Lange, S. Koenig, Eric McConnell, Chien-Yen Chang, Rick Juang, Evan A. Suma, M. Bolas, A. Rizzo","doi":"10.1109/VR.2012.6180935","DOIUrl":"https://doi.org/10.1109/VR.2012.6180935","url":null,"abstract":"Using video games in rehabilitation settings has the potential to provide patients with fun and motivating exercise tools. Within the Medical VR and MxR groups at the USC Institute for Creative Technologies, we have been leveraging the technology of the Microsoft Kinect 3D depth-sensing camera. Our Kinect-based rehabilitation game “JewelMine” consists of a set of static balance training exercises which encourage the players to reach out of their base of support. We plan to demonstrate a sophisticated post-session analysis tool and several content themes which can be changed dynamically during a therapy session.","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134033075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape-COG Illusion: Psychophysical influence on center-of-gravity perception by mixed-reality visual stimulation","authors":"Hiroki Omosako, Asako Kimura, F. Shibata, H. Tamura","doi":"10.1109/VR.2012.6180884","DOIUrl":"https://doi.org/10.1109/VR.2012.6180884","url":null,"abstract":"Mixed reality (MR) is a technology that merges real and virtual worlds in real time. In MR environments, visual appearance of a real object can be changed by superimposing a virtual object on it. In this study, we focus on the center-of-gravity (COG) and verify the influence of MR visual stimulation on the COG in MR environments. This paper describes the systematic experiments performed to study the influence. The results obtained are interesting: (1) the presence of COG can be changed by MR visual stimulation; (2) although COG differs in vision and force, the presence of COG can be represented by MR visual stimulation under certain conditions; (3) COG perception can also be changed by varying the mass of the real object. We named this psychophysical influence the “Shape-COG Illusion.”","PeriodicalId":220761,"journal":{"name":"2012 IEEE Virtual Reality Workshops (VRW)","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131999552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}