{"title":"A theater for viewing and editing multi-sensory content","authors":"K. Hirota, Seichiro Ebisawa, Tomohiro Amemiya, Y. Ikei","doi":"10.1109/ISVRI.2011.5759643","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759643","url":null,"abstract":"This research has been carried out as part of a multi-sensory theater project that aims at establishing technology to integrate a range of sensations such as visual, audio, force, tactile, vestibular, and odor into passive and interactive media content and communications. This paper describes the approaches that are being examined and the current status of the project. As a platform for the experiments, a prototype version of a multi-sensory theater has been implemented. The theater is equipped with devices that present wind and olfactory sensations. The sensation of wind is generated by both computer-controlled fans and air nozzles connected to a source of compressed air, and the olfactory sensation is presented by emitting odorants into the air. To facilitate the creation of content, a framework for editing multi-sensory information was constructed, in which all of the devices were connected to and controlled by a sequencer based on a MIDI interface.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116442987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hand posture recognition using shape decomposition","authors":"Junyeong Choi, Jong-Il Park","doi":"10.1109/ISVRI.2011.5759672","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759672","url":null,"abstract":"This paper proposes a vision-based hand posture recognition method. Our method does not require any database but can recognize tiny motion changes of fingers by applying shape decomposition to hand posture recognition.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125037515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An engine of virtual reality mixing environment based on real-time modeling and interaction","authors":"Zhang Shujun","doi":"10.1109/ISVRI.2011.5759621","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759621","url":null,"abstract":"Mixed reality (MR) is a further research hotspot with the development of virtual reality (VR). Seamless fusion and real-time interaction between virtual environment and real users or objects are two most important problems urgent to be solved for a MR system. In this paper an engine of virtual reality mixing environment E-VRME is proposed based on real-time modeling and interaction and three applications are demonstrated. The engine runs on our self-built hardware platform DreamWorld and has three modules. Graphics module is designed for VE rendering, modeling module for 3D reconstruction and V-R interaction module for collision detection and feedback. An octree-based visual hull modeling method is used for constructing the mesh models of all the users from their multi-view images. Then, marching cubes algorithm is applied for triangulation and texture mapping is carried out. E-VRME integrates virtual and real information into a seamless unit and it supports any kinds of real-time MR applications. Experimental results and three different application systems proved its usefulness and effectiveness.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129667546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical Voronoi diagram-based path planning among polygonal obstacles for 3D virtual worlds","authors":"Xiaoting Wang, Chenglei Yang, Jiaye Wang, Xiangxu Meng","doi":"10.1109/ISVRI.2011.5759626","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759626","url":null,"abstract":"Path planning is one of important problems in virtual reality, computational geometry, robotic and GIS. This paper presents two methods of computing walkthrough paths in a polygon with h holes and a total of n vertices, for 3D virtual worlds such as virtual museums and games. For any two points s and t, one method is presented to compute the shortest Voronoi skeleton path SVSP(s,t) in O(k + hlogh + logn) time, where k is the number of the Voronoi edges in SVSP(s,t) and O(k) ≤ O(n). An approximate shortest path from s to t can be computed in O(k + hlogh + logn) time based on SVSP(s,t). The other is to compute the shortest path SP(s,t) in O(m + hlog(n/h) + h2 logh) time, where m is the number of the edges in SP(s,t) and m ≤ n. They are based on a hierarchical Voronoi diagram presented in this paper, which can be constructed in time O(nlogn) with O(n) space in the preprocessing stage. This data structure also can be used to fast solve visibility information, collision detection and other problems in 3D virtual worlds. The methods are simpler and faster than other existed methods, with lower space.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115723609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Study on the virtual natural landscape walkthrough by using Unity 3D","authors":"K. Yang, Jiang Jie, Shen Haihui","doi":"10.1109/ISVRI.2011.5759642","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759642","url":null,"abstract":"This paper describes the use of this 3D virtual reality engine of making natural landscape walkthrough. It explains the methods of making the sky, topography, trees and flowers, water of nature common things in Unity 3D, and expound Unity 3D software has quick and convenient advantage in making of virtual roaming, at last, it emphaise the pursuit for realistic scene is more important than the scene optimization under the condition of current level of computer.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115941664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive generation of virtual environments using MUAVs","authors":"Sven Strothoff, D. Feldmann, Frank Steinicke, Tom Vierjahn, Sina Mostafawy","doi":"10.1109/ISVRI.2011.5759608","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759608","url":null,"abstract":"In this paper we introduce a novel approach to interactively generate a virtual environment (VE) as reconstruction of the real world by using a swarm of autonomously flying miniature unmanned aerial vehicles (MUAVs). In our system each of the MUAVs is equipped with advanced network as well as different sensor technologies, which allow to capture images from low altitudes and to transfer those to a ground station, where the captured environment is reconstructed. At the ground station, the virtual 3D reconstruction is displayed in different virtual reality (VR) environments in which operators can not only explore the VE, but also interactively control the 3D reconstruction process. Therefore, virtual representations of the real MUAVs are visually integrated in the VE according to their real world position and orientation. The operator can manually steer these virtual MUAVs leading to corresponding movements of their real world counterparts. In this paper we introduce the AVIGLE project. In particular, we describe the 3D reconstruction process as well as the rendering of the large amount of data collected by the MUAVs. Furthermore, we outline the VR setup, and explain in detail how operators can interact with the virtual 3D reconstruction and the visual representation of the MUAVs.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131371950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Internet-wide multi-party tele-immersion framework for remote 3D collaboration","authors":"Zhong Zhou, Xiuwen Chen, Lin Zhang, Xuefeng Chang","doi":"10.1109/ISVRI.2011.5759628","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759628","url":null,"abstract":"Tele-presence and tele-immersion systems are growing to be more realistic these years. However, few show practical experiences in Internet wide tele-immersion, especially in acceptable end-to-end delay. We present a framework for multi-party tele-immersion applications that allow remote collaborative 3D interactions. This framework performs real-time stereo modeling based on the bumblebee cameras. It provides an architecture that supports several tele-immersion nodes to communicate in groups. This paper addresses how to reconstruct, compress and establish the system. The system has been set up in China NGI. Preliminary experiment results show that it's possible to have the delay under 100ms with 320*240 depth map and 640*480 texture images. The framework is also expected to be further extended for interaction with full-body reconstruction.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121875029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation and 3D visualization of oil spill on the sea","authors":"Feng Yu, Yong Yin","doi":"10.1109/ISVRI.2011.5759636","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759636","url":null,"abstract":"For perfecting the oil spill simulation view system of marine search and rescue simulator, based on the tidal current numerical calculation, considering oil movement mathematical models, designing a collision detection algorithm between oil and island, using texture rendering and plane refraction technologies of Open Scene Graph, finally the simulation and 3D visualization of oil spill on the sea are realized in real-time. The visualization effects are realistic. This algorithm has a practical value for predicting the trajectory of offshore oil spill.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"1996 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125574204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Super thin 3D form display for multimodal user experiences using vertically deformation of leaf spring and SMA","authors":"Yusuke Nakagawa, S. Yonekura, Y. Kawaguchi","doi":"10.1109/ISVRI.2011.5759603","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759603","url":null,"abstract":"In this paper, we present super-thin 3D form display as visual and haptics multimodal interface. In contrast to the existing 3D form displays and surfaces requiring huge and complicated system, we developed super-thin 3D form display. We achieved its thickness by using our proposed flat mechanical system that consists of a SMA-coil and a single leaf spring. The thickness of this system is below 1.0 mm and this system can generate 30.0 mm stroke. Furthermore, it can lift an object of 50 g at 30.0 mm and can withstand 100 g load at the top. This super-thin 3D form display using this system has no precedent and is expected to be used for entertainment and digital signage. In addition to this, our proposed flat system provides a basis for the future ubiquitous and portable mechanical man-machine interfaces.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125712802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transmission of existence by retro reflective projection technology using handheld projector","authors":"Keisuke Takeshita, Kouichi Watanabe, Takumi Yoshida, K. Minamizawa, S. Tachi","doi":"10.1109/ISVRI.2011.5759605","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759605","url":null,"abstract":"Our research aims to enable persons to communicate remotely with numerous participants while moving around freely in public spaces. To achieve this goal, we developed a novel remote communication system based on a mutual telexistence master-slave system. In this paper, we propose a system that transmits the operator's sense of existence to the participant using retro-reflective projection technology and a handheld projector, which solves the problem of a limited sense of the surroundings. Although stereoscopic presentation by motion parallax did not occur in this study, this system was expected to give the participant a stereoscopic view via the surrogate robot using a handheld projector.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131786888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}