{"title":"Network streaming of dynamic 3D content with on-line compression of frame data","authors":"G. Marino, P. Gasparello, D. Vercelli, F. Tecchia, M. Bergamasco","doi":"10.1109/VR.2010.5444762","DOIUrl":"https://doi.org/10.1109/VR.2010.5444762","url":null,"abstract":"Real-time 3D content distribution over a network requires facing several challenges, most notably the handling of the large amount of data usually associated with 3D meshes. The scope of the present paper falls within the well-established context of real-time capture and streaming of OpenGL command sequences, focusing in particular on data compression schemes. However, we advance beyond the state-of-the-art improving over previous attempts of “in-frame” geometric compression on 3D structures inferred from generic OpenGL command sequences and adding “inter-frame” redundancy exploitation of the traffic generated by the typical architecture of interactive applications.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114336069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enabling functional validation of virtual cars through Natural Interaction metaphors","authors":"Mathias Moehring, B. Fröhlich","doi":"10.1109/VR.2010.5444819","DOIUrl":"https://doi.org/10.1109/VR.2010.5444819","url":null,"abstract":"Natural Interaction in virtual environments is a key requirement for the virtual validation of functional aspects in automotive product development processes. Natural Interaction is the metaphor people encounter in reality: the direct manipulation of objects by their hands. To enable this kind of Natural Interaction, we propose a pseudo-physical metaphor that is both plausible enough to provide realistic interaction and robust enough to meet the needs of industrial applications. Our analysis of the most common types of objects in typical automotive scenarios guided the development of a set of refined grasping heuristics to support robust finger-based interaction of multiple hands and users. The objects' behavior in reaction to the users' finger motions is based on pseudo-physical simulations, which also take various types of constrained objects into account. In dealing with real-world scenarios, we had to introduce the concept of Normal Proxies, which extend objects with appropriate normals for improved grasp detection and grasp stability. 
An expert review revealed that our interaction metaphors allow for an intuitive and reliable assessment of several functionalities of objects found in a car interior.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115749116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed reality in virtual world teleconferencing","authors":"Tuomas Kantonen, Charles Woodward, Neil Katz","doi":"10.1109/VR.2010.5444792","DOIUrl":"https://doi.org/10.1109/VR.2010.5444792","url":null,"abstract":"In this paper we present a Mixed Reality (MR) teleconferencing application based on Second Life (SL) and the OpenSim virtual world. Augmented Reality (AR) techniques are used for displaying virtual avatars of remote meeting participants in real physical spaces, while Augmented Virtuality (AV), in form of video based gesture detection, enables capturing of human expressions to control avatars and to manipulate virtual objects in virtual worlds. The use of Second Life for creating a shared augmented space to represent different physical locations allows us to incorporate the application into existing infrastructure. The application is implemented using open source Second Life viewer, ARToolKit and OpenCV libraries.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123503838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effect based scene manipulation for multimodal VR systems","authors":"Matthias Haringer, Steffi Beckhaus","doi":"10.1109/VR.2010.5444814","DOIUrl":"https://doi.org/10.1109/VR.2010.5444814","url":null,"abstract":"Games use high quality graphics and pre-crafted visual and auditive effects to create user experiences. Virtual environments offer new navigation and interaction methods, immersive installations, and haptic and olfactoric output. We introduce an extension to VR systems, which makes it possible to use a wide range of multimodal effects from gaming and VR to be activated and modified on a per object basis at runtime. To access, manipulate, and add effects to the objects of a scene intuitively, our extension realizes an abstract, hierarchical scene concept using multimodal objects. Multiple effects can be added to each object and the parameters of each effect can be manipulated online. Fading of effects and bundling of multiple effects for multiple objects are more advanced features of the system.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122314853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CAGRA: Optimal parameter setting and rendering method for occlusion-capable automultiscopic three-dimensional display","authors":"Yusuke Doyama, Atsushi Kodama, T. Tanikawa, K. Tagawa, K. Hirota, M. Hirose","doi":"10.1109/VR.2010.5444772","DOIUrl":"https://doi.org/10.1109/VR.2010.5444772","url":null,"abstract":"We have developed a novel automultiscopic display, CAGRA (Computer-Aided Graphical Real-world Avatar), which can provide full parallax both horizontally and vertically without imposing any other additional equipment such as goggles on users. CAGRA adopts two axes of rotation so that it can distribute reflected light all around. In our previous work, it was proved that the display can present a black and white image of a 3D object at 1Hz with both horizontal and vertical parallax. Here, we discuss two important problems that remain unsolved in our previous work: optimal parameter setting and the rendering method. As for finding optimal parameters, the relationship among the rotation speed of two axes and the diffusion angle of the holographic diffuser is discussed. The rendering method was implemented so that accurate view is presented in spite of the unique mechanical structure of CAGRA. Furthermore, the synchronization mechanism between projection of images and rotation of the mirror, which was also a remaining problem, is also implemented. 
Findings obtained through solving these problems clarify the characteristics of this novel display system, CAGRA, and lead to its further development.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121079860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context sensitive interaction interoperability for distributed virtual environments","authors":"Hussein M. Ahmed, D. Gračanin, Peter J. Radics","doi":"10.1109/VR.2010.5444778","DOIUrl":"https://doi.org/10.1109/VR.2010.5444778","url":null,"abstract":"User interactions and related input devices and techniques can be customized to improve user experience and task performances. We consider context and context awareness to help modify applications interface and interactions in order to match the tasks users are currently performing. We explore how to adapt to the context by selecting input device and interaction technique. Finally, we present the framework developed to realize these affordances and discuss the addressed challenges.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133766046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed reality in the loop — design process for interactive mechatronical systems","authors":"Jörg Stöcklein, C. Geiger, V. Paelke","doi":"10.1109/VR.2010.5444755","DOIUrl":"https://doi.org/10.1109/VR.2010.5444755","url":null,"abstract":"Mixed reality techniques have high potential to support the development of complex systems that operate in a real world environment, especially mechatronic systems. In our paper we present the Mixed Reality in the Loop design process that enables a seamless progression from an initial virtual prototype to the final system along the mixed reality continuum.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"257 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132724389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In-Place Sketching for content authoring in Augmented Reality games","authors":"Nate Hagbi, R. Grasset, Oriel Bergig, M. Billinghurst, Jihad El-Sana","doi":"10.1109/VR.2010.5444806","DOIUrl":"https://doi.org/10.1109/VR.2010.5444806","url":null,"abstract":"Sketching leverages human skills for various purposes. In-Place Augmented Reality Sketching experiences build on the intuitiveness and flexibility of hand sketching for tasks like content creation. In this paper we explore the design space of In-Place Augmented Reality Sketching, with particular attention to content authoring in games. We propose a contextual model that offers a framework for the exploration of this design space by the research community. We describe a sketch-based AR racing game we developed to demonstrate the proposed model. The game is developed on top of our shape recognition and 3D registration library for mobile AR.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133613400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulating hearing loss in virtual training","authors":"Amy Sadek, D. Krum, M. Bolas","doi":"10.1109/VR.2010.5444757","DOIUrl":"https://doi.org/10.1109/VR.2010.5444757","url":null,"abstract":"Audio systems for virtual reality and augmented reality training environments commonly focus on high-quality audio reproduction. Yet many trainees may face real-world situations where hearing is compromised. In these cases, the hindrance caused by impaired or lost hearing is a significant stressor that may affect performance. Because this phenomenon is hard to simulate without actually causing hearing damage, trainees are largely unpracticed at operating with diminished hearing. To improve the match between training scenarios and the real-world situation, this effort aims to add simulated hearing loss or impairment as a training variable. Stated briefly, the goal is to effect everything users hear — including non-simulated sounds such as their own and each other's voices — without damaging their hearing, being overtly noticeable, or requiring the donning of headphones.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"380 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132523551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GPU implementation of 3D object selection by conic volume techniques in virtual environments","authors":"T. Rick, Anette von Kapri, T. Kuhlen","doi":"10.1109/VR.2010.5444783","DOIUrl":"https://doi.org/10.1109/VR.2010.5444783","url":null,"abstract":"In this paper we present a GPU implementation to accurately select 3D objects based on their silhouettes by a pointing device with six degrees of freedom (6DOF) in a virtual environment (VE). We adapt a 2D picking metaphor to 3D selection in VE's by changing the projection and view matrices according to the position and orientation of a 6DOF pointing device and rendering a conic selection volume to an off-screen pixel buffer. This method works for triangulated as well as volume rendered objects, no explicit geometric representation is required.","PeriodicalId":151060,"journal":{"name":"2010 IEEE Virtual Reality Conference (VR)","volume":"104 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113983633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}