{"title":"Synthesizing contact sounds between textured models","authors":"Zhimin Ren, Hengchin Yeh, M. Lin","doi":"10.1109/VR.2010.5444799","DOIUrl":"https://doi.org/10.1109/VR.2010.5444799","url":null,"abstract":"We present a new interaction handling model for physics-based sound synthesis in virtual environments. A new three-level surface representation for describing object shapes, visible surface bumpiness, and microscopic roughness (e.g. friction) is proposed to model surface contacts at varying resolutions for automatically simulating rich, complex contact sounds. This new model can capture various types of surface interaction, including sliding, rolling, and impact with a combination of three levels of spatial resolutions. We demonstrate our method by synthesizing complex, varying sounds in several interactive scenarios and a game-like virtual environment. The three-level interaction model for sound synthesis enhances the perceived coherence between audio and visual cues in virtual reality applications.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129543634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sound synthesis and evaluation of interactive footsteps for virtual reality applications","authors":"R. Nordahl, S. Serafin, L. Turchet","doi":"10.1109/VR.2010.5444796","DOIUrl":"https://doi.org/10.1109/VR.2010.5444796","url":null,"abstract":"A system to synthesize in real-time the sound of footsteps on different materials is presented. The system is based on microphones which allow the user to interact with his own footwear. This solution distinguishes our system from previous efforts that require specific shoes enhanced with sensors. The microphones detect real footsteps sounds from users, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Evaluations of the system in terms of sound validity and fidelity of interaction are described.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125891457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time continuum grass","authors":"Kan Chen, H. Johan","doi":"10.1109/VR.2010.5444785","DOIUrl":"https://doi.org/10.1109/VR.2010.5444785","url":null,"abstract":"Simulating grass field in real-time has many applications, such as in virtual reality and games. Modeling accurate grass-grass, grass-object and grass-wind interactions requires a high computational cost. In this paper, we present a method to simulate grass field in real-time by considering grass field as a two dimensional grid-based continuum and shifting the complex interactions to the dynamics of continuum. We adopt the wave simulation as the numerical model for the dynamics of continuum which represents grass-grass interaction. We propose a procedural approach to handle grass-object and grass-wind interactions as external force that updates the wave simulation. The proposed method can be efficiently implemented on a GPU. As a result, massive amounts of grass can interact with moving objects and wind in real-time.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122342054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating soft tissues using a GPU approach of the mass-spring model","authors":"Christian Andres Diaz Leon, S. Eliuk, H. Trefftz","doi":"10.1109/VR.2010.5444775","DOIUrl":"https://doi.org/10.1109/VR.2010.5444775","url":null,"abstract":"The recent advances in the fields such as modeling bio-mechanics of living tissues, haptic technologies, computational capacity, and graphics realism have created conditions necessary in order to develop effective surgical training using virtual environments. However, virtual simulators need to meet two requirements, they need to be real-time and highly realistic. The most expensive computational task in a surgical simulator is that of the physical model. The physical model is the component responsible to simulate the deformation of the anatomical structures and the most important factor in order to obtain realism. In this paper we present a novel approach to virtual surgery. The novelty comes in two forms: specifically a highly realistic mass-spring model, and a GPU based technique, and analysis, that provides a nearly 80x speedup over serial execution and 20x speedup over CPU based parallel execution.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128088079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection thresholds for label motion in visually cluttered displays","authors":"Stephen D. O'Connell, Magnus Axholt, M. Cooper, S. Ellis","doi":"10.1109/VR.2010.5444788","DOIUrl":"https://doi.org/10.1109/VR.2010.5444788","url":null,"abstract":"While label placement algorithms are generally successful in managing visual clutter by preventing label overlap, they can also cause significant label movement in dynamic displays. This study investigates motion detection thresholds for various types of label movement in realistic and complex virtual environments, which can be helpful for designing less salient and disturbing algorithms. Our results show that label movement in stereoscopic depth is shown to be less noticeable than similar lateral monoscopic movement, inherent to 2D label placement algorithms. Furthermore, label movement can be introduced more readily into the visual periphery (over 15° eccentricity) because of reduced sensitivity in this region. Moreover, under the realistic viewing conditions that we used, motion of isolated labels is more easily detected than that of overlapping labels. This perhaps counterintuitive finding may be explained by visual masking due to the visual clutter arising from the label overlap. The quantitative description of the findings presented in this paper should be useful not only for label placement applications, but also for any cluttered AR or VR application in which designers wish to control the users' visual attention, either making text labels more or less noticeable as needed.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"06 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130387147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual and tactile information to improve drivers' performance","authors":"Shin'ichi Onimaru, M. Kitazaki","doi":"10.1109/VR.2010.5444759","DOIUrl":"https://doi.org/10.1109/VR.2010.5444759","url":null,"abstract":"Usually we steer a car using mainly visual information to perceive road's shape and bends. We developed a driving simulator with visual and/or tactile information guides to virtually present drivers' lateral position and to enhance their steering performance. The purpose of this study was to test effects of the cross-modal guide information on the driving performance. We found that the tactile guide improved driving accuracy more than the visual guide without any tradeoff of driving loads. Thus, the tactile information of virtual position of a car is useful for assisting and improving driver's performance with fewer loads.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"82 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131672964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Show some respect! The impact of technological factors on the treatment of virtual humans in conversational training systems","authors":"K. Johnsen, B. Rossen, Diane Beck, Benjamin C. Lok, D. Lind","doi":"10.1109/VR.2010.5444769","DOIUrl":"https://doi.org/10.1109/VR.2010.5444769","url":null,"abstract":"Understanding the human-computer interface factors that influence users' behavior with virtual humans will enable more effective human-virtual human encounters. This paper presents evidence of a significant relationship between behavioral indicators of respect by users and virtual reality technology factors. Moreover, we found this evidence in an application domain where respect for others is fundamentally important, health professional education.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130622165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Streaming 3D shape deformations in collaborative virtual environment","authors":"Ziying Tang, Guodong Rong, X. Guo, B. Prabhakaran","doi":"10.1109/VR.2010.5444793","DOIUrl":"https://doi.org/10.1109/VR.2010.5444793","url":null,"abstract":"Collaborative virtual environment has been limited on static or rigid 3D models, due to the difficulties of real-time streaming of large amounts of data that is required to describe motions of 3D deformable models. Streaming shape deformations of complex 3D models arising from a remote user's manipulations is a challenging task. In this paper, we present a framework based on spectral transformation that encodes surface deformations in a frequency format to successfully meet the challenge, and demonstrate its use in a distributed virtual environment. Our research contributions through this framework include: i) we reduce the data size to be streamed for surface deformations since we stream only the transformed spectral coefficients and not the deformed model; ii) we propose a mapping method to allow models with multi-resolutions to have the same deformations simultaneously; iii) our streaming strategy can tolerate loss without the need for special handling of packet loss. Our system guarantees real-time transmission of shape deformations and ensures the smooth motions of 3D models. Moreover, we achieve very effective performance over real Internet conditions as well as a local LAN. Experimental results show that we get low distortion and small delays even when surface deformations of large and complicated 3D models are streamed over lossy networks.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122556784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An extensible mirror world from user-generated content","authors":"Severi Uusitalo, Peter Eskolin, Yu You, P. Belimpasakis","doi":"10.1109/VR.2010.5444751","DOIUrl":"https://doi.org/10.1109/VR.2010.5444751","url":null,"abstract":"In this paper we describe a system for creating a navigable mirror world, utilizing community photographs of real life environments. We present the essential architecture and a prototype solution for not only geotagging, but also spatially structuring content. Mash-up interfaces are available towards 3rd parties, for linking and georeferencing their content.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123224227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of tactile feedback and presence on egocentric distance perception in virtual environments","authors":"F. Ahmed, Joseph D. Cohen, K. Binder, C. Fennema","doi":"10.1109/VR.2010.5444791","DOIUrl":"https://doi.org/10.1109/VR.2010.5444791","url":null,"abstract":"A number of studies have reported that distance judgments are underestimated in virtual environments (VE) when compared to those made in the real world. Studies have also reported that providing users with visual feedback in the VE improves their distance perception and made them feel more immersed in the virtual world. In this study, we investigated the effect of tactile feedback and visual manipulation of the VE on egocentric distance perception. In contrast to previous studies which have focused on task-specific and error-corrective feedback (for example, providing knowledge about the errors in distance estimations), we demonstrate that exploratory feedback is sufficient for reducing errors in distance estimation. In Experiment 1, the effects of different types of feedback (visual, tactile and visual plus tactile) on distance judgments were studied. Tactile feedback was given to participants as they explored and touched objects in a VE. Results showed that distance judgments improved in the VE regardless of the type of sensory feedback provided. In Experiment 2, we presented a real world environment to the participants and then situated them in a VE that was either a replica or an altered representation of the real world environment. Results showed that participants made significant underestimation in their distance judgments when the VE was not a replica of the physical space. We further found that providing both visual and tactile feedback did not reduce distance compression in such a situation. These results are discussed in the light of the nature of feedback provided and how assumptions about the VE may affect distance perception in virtual environments.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131616389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}