{"title":"A simplification architecture for exploring navigation tradeoffs in mobile VR","authors":"Carlos D. Correa, I. Marsic","doi":"10.1109/VR.2004.6","DOIUrl":"https://doi.org/10.1109/VR.2004.6","url":null,"abstract":"Interactive applications on mobile devices often reduce data fidelity to adapt to resource constraints and variable user preferences. In virtual reality applications, the problem of reducing scene graph fidelity can be stated as a combinatorial optimization problem, where a part of the scene graph with maximum fidelity is chosen such that the resources it requires are below a given threshold and the hierarchical relationships are maintained. The problem can be formulated as a variation of the tree knapsack problem, which is known to be NP-hard. For this reason, solutions to this problem result in a tradeoff that affects user navigation. On one hand, exact solutions provide the highest fidelity but may take long time to compute. On the other hand, greedy solutions are fast but lack high fidelity. We present a simplification architecture that allows the exploration of such navigation tradeoffs. This is achieved by a formulating the problem in a generic way and developing software components that allow the dynamic selection of algorithms and constraints. The experimental results show that the architecture is flexible and supports dynamic reconfiguration.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115059453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MVL toolkit: software library for constructing an immersive shared virtual world","authors":"T. Ogi, T. Kayahara, T. Yamada, M. Hirose","doi":"10.1109/VR.2004.54","DOIUrl":"https://doi.org/10.1109/VR.2004.54","url":null,"abstract":"In this study, we investigated various functions that are required in an immersive shared virtual world, and then developed the MVL toolkit to implement these functions. The MVL toolkit contains several utilities that enable such functions as sharing space, sharing users, sharing operations, sharing information and sharing time. By using the MVL toolkit, collaborative virtual reality applications can be easily constructed by extending existing stand-alone application programs.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116571529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real world video avatar: transmission and presentation of human figure","authors":"Hiroyuki Maeda, T. Tanikawa, J. Yamashita, K. Hirota, M. Hirose","doi":"10.1109/VR.2004.64","DOIUrl":"https://doi.org/10.1109/VR.2004.64","url":null,"abstract":"Video avatar (Ogi et al., 2001) is one methodology of interaction with people at a remote location. By using such video-based real-time human figures, participants can interact using nonverbal information such as gestures and eye contact. In traditional video avatar interaction, however, participants can interact only in \"virtual\" space. We have proposed the concept of a \"real-world video avatar\", that is, the concept of video avatar presentation in \"real\" space. One requirement of such a system is that the presented figure must be viewable from various directions, similarly to a real human. In this paper such a view is called \"multiview\". By presenting a real-time human figure with \"multiview\", many participants can interact with the figure from all directions, similarly to interaction in the real world. A system that supports \"multiview\" was proposed by Endo et al. (2000), however, this system cannot show real-time images. We have developed a display system which supports \"multiview\" (Maeda et al., 2002). In this paper, we discuss the evaluation of real-time presentation using the display system.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131407151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracker calibration using tetrahedral mesh and tricubic spline models of warp","authors":"C. Borst","doi":"10.1109/VR.2004.79","DOIUrl":"https://doi.org/10.1109/VR.2004.79","url":null,"abstract":"This paper presents a three-level tracker calibration system that greatly reduces errors in tracked position and orientation. The first level computes an error-minimizing rigid body transform that eliminates the need for precise alignment of a tracker base frame. The second corrects for field warp by interpolating correction values stored with vertices in a tetrahedrization of warped space. The third performs an alternative field warp calibration by interpolating corrections in the parameter space of a tricubic spline model of field warp. The system is evaluated for field warp calibration near a passive-haptic panel in both low-warp and high-warp environments. The spline method produces the most accurate results, reducing median position error by over 90% and median orientation error by over 80% when compared to the use of only a rigid body transform.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130437856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigation with place representations and visible landmarks","authors":"Jeffrey S. Pierce, R. Pausch","doi":"10.1109/VR.2004.55","DOIUrl":"https://doi.org/10.1109/VR.2004.55","url":null,"abstract":"Existing navigation techniques do not scale well to large virtual worlds. We present a new technique, navigation with place representations and visible landmarks that scales from town-sized to planet-sized worlds. Visible landmarks make distant landmarks visible and allow users to travel relative to those landmarks with a single gesture. Actual and symbolic place representations allow users to detect and travel to more distant locations with a small number of gestures. The world's semantic place hierarchy determines which visible landmarks and place representations users can see at any point in time. We present experimental results demonstrating that our technique allows users to navigate more efficiently than a modified panning and zooming W1M, completing within-place navigation tasks 22% faster and between-place tasks 38% faster on average.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130482339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Projector-based dual-resolution stereoscopic display","authors":"G. Godin, Jean-François Lalonde, L. Borgeat","doi":"10.1109/VR.2004.63","DOIUrl":"https://doi.org/10.1109/VR.2004.63","url":null,"abstract":"We present a stereoscopic display system which incorporates a high-resolution inset image, or fovea. We describe the specific problem of false depth cues along the boundaries of the inset image, and propose a solution in which the boundaries of the inset image are dynamically adapted as a function of the geometry of the scene. This method produces comfortable stereoscopic viewing at a low additional computational cost. The four projectors need only be approximately aligned: a single drawing pass is required, regardless of projector alignment, since the warping is applied as part of the 3D rendering process.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126868073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified gesture-based interaction techniques for object manipulation and navigation in a large-scale virtual environment","authors":"Yusuke Tomozoe, Takashi Machida, K. Kiyokawa, H. Takemura","doi":"10.1109/VR.2004.81","DOIUrl":"https://doi.org/10.1109/VR.2004.81","url":null,"abstract":"Manipulation of virtual objects and navigation are common operations in a large-scale virtual environment. In this paper, we propose a few gesture-based interaction techniques that can be used for both object manipulation and navigation. Unlike existing methods, our techniques enable a user to perform these two types of operations flexibly with a little practice in identical interaction manners by introducing a movability property attached to every virtual object.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115173026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Focus measurement on programmable graphics hardware for all in-focus rendering from light fields","authors":"Kaoru Sugita, Keita Takahashi, T. Naemura, H. Harashima","doi":"10.1109/VR.2004.39","DOIUrl":"https://doi.org/10.1109/VR.2004.39","url":null,"abstract":"This paper deals with a method for interactive rendering of photorealistic images, which is a fundamental technology in the field of virtual reality. Since the latest graphics processing units (GPUs) are programmable, they are expected to be useful for various applications including numerical computation and image processing. This paper proposes a method for focus measurement on light field rendering using a GPU as a fast processing unit for image processing and image-based rendering. It is confirmed that the proposed method enables interactive all in-focus rendering from light fields. This is because the latest DirectX 9 generation GPUs are much faster than CPUs in solving optimization problems, and a GPU implementation can eliminate the latency for data transmission between video memory and system memory. Experimental results show that the GPU implementation outperforms its CPU implementation.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129400650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}