{"title":"ENVIRON - Visualization of CAD Models In a Virtual Reality Environment","authors":"E. Corseuil, A. Raposo, Romano J. M. da Silva, Marcio H. G. Pinto, G. Wagner, M. Gattass","doi":"10.2312/EGVE/EGVE04/079-082","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/079-082","url":null,"abstract":"This paper presents ENVIRON (ENvironment for VIRtual Objects Navigation), an application that was developed motivated by the necessity of using Virtual Reality in large industrial engineering models coming from CAD (Computer Aided Design) tools. This work analyzes the main problems related to the production of a VR model, derived from the CAD model.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115684996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Changes in Navigational Behaviour Produced by a Wide Field of View and a High Fidelity Visual Scene","authors":"S. Lessels, R. Ruddle","doi":"10.2312/EGVE/EGVE04/071-078","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/071-078","url":null,"abstract":"The difficulties people frequently have navigating in virtual environments (VEs) are well known. Usually these difficulties are quantified in terms of performance (e.g., time taken or number of errors made in following a path), with these data used to compare navigation in VEs to equivalent real-world settings. However, an important cause of any performance differences is changes in people's navigational behaviour. This paper reports a study that investigated the effect of visual scene fidelity and field of view (FOV) on participants' behaviour in a navigational search task, to help identify the thresholds of fidelity that are required for efficient VE navigation. With a wide FOV (144 degrees), participants spent significantly larger proportion of their time travelling through the VE, whereas participants who used a normal FOV (48 degrees) spent significantly longer standing in one place planning where to travel. Also, participants who used a wide FOV and a high fidelity scene came significantly closer to conducting the search \"perfectly\" (visiting each place once). In an earlier real-world study, participants completed 93% of their searches perfectly and planned where to travel while they moved. Thus, navigating a high fidelity VE with a wide FOV increased the similarity between VE and real-world navigational behaviour, which has important implications for both VE design and understanding human navigation. \u0000 \u0000Detailed analysis of the errors that participants made during their non-perfect searches highlighted a dramatic difference between the two FOVs. 
With a narrow FOV participants often travelled right past a target without it appearing on the display, whereas with the wide FOV targets that were displayed towards the sides of participants overall FOV were often not searched, indicating a problem with the demands made by such a wide FOV display on human visual attention.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131561331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Real-Time System for Full Body Interaction with Virtual Worlds","authors":"Jean-Marc Hasenfratz, M. Lapierre, F. Sillion","doi":"10.2312/EGVE/EGVE04/147-156","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/147-156","url":null,"abstract":"Real-time video acquisition is becoming a reality with the most recent camera technology. Three-dimensional models can be reconstructed from multiple views using visual hull carving techniques. However the combination of these approaches to obtain a moving 3D model from simultaneous video captures remains a technological challenge. In this paper we demonstrate a complete system architecture allowing the real-time (≤ 30 fps) acquisition and full-body reconstruction of one or several actors, which can then be integrated in a virtual environment. A volume of approximately 2m3 is observed with (at least) four video cameras and the video fluxes are processed to obtain a volumetric model of the moving actors. The reconstruction process uses a mixture of pipelined and parallel processing, using N individual PCs for N cameras and a central computer for integration, reconstruction and display. A surface description is obtained using a marching cubes algorithm. We discuss the overall architecture choices, with particular emphasis on the real-time constraint and latency issues, and demonstrate that a software synchronization of the video fluxes is both sufficient and efficient. 
The ability to reconstruct a full-body model of the actors and any additional props or elements opens the way for very natural interaction techniques using the entire body and real elements manipulated by the user, whose avatar is immersed in a virtual world.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132162962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Saccadic Suppression to Hide Graphic Updates","authors":"J. Schumacher, R. Allison, R. Herpers","doi":"10.2312/EGVE/EGVE04/017-024","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/017-024","url":null,"abstract":"In interactive graphics it is often necessary to introduce large changes in the image in response to updated information about the state of the system. Updating the local state immediately would lead to a sudden transient change in the image, which could be perceptually disruptive. However, introducing the correction gradually using smoothing operations increases latency and degrades precision. It would be beneficial to be able to introduce graphic updates immediately if they were not perceptible. In the paper the use of saccade-contingent updates is exploited to hide graphic updates during the period of visual suppression that accompanies a rapid, or saccadic, eye movement. \u0000 \u0000Sensitivity to many visual stimuli is known to be reduced during a change in fixation compared to when the eye is still. For example, motion of a small object is harder to detect during a rapid eye movement (saccade) than during a fixation. To evaluate if these findings generalize to large scene changes in a virtual environment, gaze behavior in a 180 degree hemispherical display was recorded and analyzed. This data was used to develop a saccade detection algorithm adapted to virtual environments. The detectability of trans-saccadic scene changes was evaluated using images of high resolution real world scenes. The images were translated by 0.4, 0.8 or 1.2 degrees of visual angle during horizontal saccades. The scene updates were rarely noticeable for saccades with a duration greater than 58 ms. The detection rate for the smallest translation was just 6.25%. 
Qualitatively, even when trans-saccadic scene changes were detectible, they were much less disturbing than equivalent changes in the absence of a saccade.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133507127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Finger Haptic Rendering of Deformable Objects","authors":"Anderson Maciel, Sofiane Sarni, Olivier Buchwalder, R. Boulic, D. Thalmann","doi":"10.2312/EGVE/EGVE04/105-112","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/105-112","url":null,"abstract":"The present paper describes the integration of a multi-finger haptic device with deformable objects in an interactive environment. Repulsive forces are synthesized and rendered independently for each finger of a user wearing a Cybergrasp force-feedback glove. Deformation and contact models are based on mass-spring systems, and the issue of the user independence is dealt with through a geometric calibration phase. Motivated by the knowledge that human hand plays a very important role in the somatosensory system, we focused on the potential of the Cybergrasp device to improve perception in Virtual Reality worlds. We especially explored whether it is possible to distinguish objects with different elasticities. Results of performance and perception tests are encouraging despite current technical and computational limitations.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122539467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Experimental Comparison of Three Optical Trackers for Model Based Pose Determination in Virtual Reality","authors":"R. V. Liere, A. V. Rhijn","doi":"10.2312/EGVE/EGVE04/025-034","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/025-034","url":null,"abstract":"In recent years many optical trackers have been proposed for usage in Virtual Environments. In this paper, we compare three model based optical tracking algorithms for pose determination of input devices. In particular, we study the behavior of these algorithms when applied to two-handed manipulation tasks. We experimentally show how critical parameters influence the relative accuracy, latency and robustness of each algorithm. Although the study has been performed in a specific near-field virtual environment, the results can be applied to other virtual environments such as workbenches and CAVEs.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130881120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical Augmented Reality based on Commercial Image Guided Surgery","authors":"J. Fischer, M. Neff, D. Freudenstein, D. Bartz","doi":"10.2312/EGVE/EGVE04/083-086","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/083-086","url":null,"abstract":"Utilizing augmented reality for applications in medicine has been a topic of intense research for several years. A number of challenging tasks need to be addressed when designing a medical AR system. These include the import and management of medical datasets and preoperatively created planning data, the registration of the patient with respect to a global coordinate system, and accurate tracking of the camera used in the AR setup as well as the respective surgical instruments. Most research systems rely on specialized hardware or algorithms for realizing augmented reality in medicine. Such base technologies can be expensive or very time-consuming to implement. In this paper, we propose an alternative approach of building a surgical AR system by harnessing existing, commercially available equipment for image guided surgery (IGS). We describe the prototype of an augmented reality application, which receives all necessary information from a device for intraoperative navigation.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126228660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Foveated Stereoscopic Display for the Visualization of Detailed Virtual Environments","authors":"G. Godin, P. Massicotte, L. Borgeat","doi":"10.2312/EGVE/EGVE04/007-016","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/007-016","url":null,"abstract":"We present a new method for the stereoscopic display of complex virtual environments using a foveated arrangement of four images. The system runs on four rendering nodes and four projectors, for the fovea and periphery in each eye view. The use of high-resolution insets in a foveated configuration is well known. However, its extension to projector-based stereoscopic displays raises a specific issue: the visible boundary between fovea and periphery present in each eye creates a stereoscopic cue that may conflict with the perceived depth of the underlying scene. A previous solution to this problem displaces the boundary in the images to ensure that it is always positioned over stereoscopically corresponding scene locations. The new method proposed here addresses the same problem, but by relaxing the stereo matching criteria and reformulating the problem as one of spatial partitioning, all computations are performed locally on each node, and require a small and fixed amount of post-rendering processing, independent of scene complexity. 
We discuss this solution and present an OpenGL implementation; we also discuss acceleration techniques using culling and fragments, and illustrate the use of the method on a complex 3D textured model of a Byzantine crypt built using laser range imaging and digital photography.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121768201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital Mock-up database simplification with the help of view and application dependent criteria for industrial Virtual Reality application","authors":"Marc Chevaldonné, M. Neveu, F. Mérienne, M. Dureigne, N. Chevassus, F. Guillaume","doi":"10.2312/EGVE/EGVE04/113-122","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/113-122","url":null,"abstract":"Aircraft cockpits are advanced interfaces dedicated to the interaction and exchange of observations and commands between the pilot and the flying system. The design process of cockpits is benefiting from the use of Virtual Reality technologies: early ergonomics and layout analysis through the exploration of numerous alternatives, availability all along the cockpit life cycle of a virtual product ready for experimentation, reduced usage of costly physical mock-ups. \u0000 \u0000Nevertheless, the construction of a virtual cockpit with the adequate performances is very complex. Due to the fact that the CAD based digital mock-up used for setting up the virtual cockpit is very large, one challenge is to achieve interactivity while maintaining the quality of rendering. The reduction of the information contained in the CAD database shall achieve a sufficient frame rate without degradation of the geometrical visual quality of the virtual cockpit which would alleviate the relevance of ergonomics and layout studies. \u0000 \u0000This paper proposes to control the simplification process by using objective criteria based on considerations about the cockpit application and the visual performances of human beings. First, it presents the results of studies on the characteristics of the Human Visual System linked to virtual reality and visualization applications. 
Illustrated by first results, it establishes how to control simplifications in a rational and automatic way.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116907920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lateral Head Tracking in Desktop Virtual Reality","authors":"Breght R. Boschker, J. D. Mulder","doi":"10.2312/EGVE/EGVE04/045-052","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE04/045-052","url":null,"abstract":"Head coupled perspective is often considered to be an essential aspect of stereoscopic desktop virtual reality (VR) systems. Such systems use a tracking device to determine the user's head pose in up to six degrees of freedom (DOF). Users of desktop VR systems perform their task while sitting down and therefore the extent of head movements is limited. This paper investigates the validity of using a head tracking system for desktop VR that only tracks lateral head movement. Users performed a depth estimation task under full (six DOF) head tracking, lateral head tracking, and disabled head tracking. Furthermore, we considered stereoscopic and monoscopic viewing. Our results show that user performance was not significantly affected when incorporating only lateral head motion. Both lateral and full head tracking performed better than the disabled head tracking case.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126787367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}