Näprä: affordable fingertip tracking with ultrasound
M. Reunanen, K. Palovuori, T. Ilmonen, Wille Mäkelä
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 2005-10-06. DOI: 10.2312/EGVE/IPT_EGVE2005/051-058
Abstract: In this paper we present Näprä, a novel tracking device suitable for fine motor interaction. The motivation for building the device was the need to track users' fingertips in an immersive free-hand drawing environment; such tracking offers significant benefits for fine-grained artwork. Of the numerous tracking methods, ultrasound was chosen because of its affordability and low computational requirements. The design and implementation of both the hardware and the software are discussed in detail in their respective sections. The device was evaluated in practice in two user tests, the first involving ten professional artists and the second seven ordinary users. The results obtained in the tests are presented to the reader, along with some directions for future work.
Augmented reality interaction for semiautomatic volume classification
Á. Río, J. Fischer, M. Köbele, D. Bartz, W. Straßer
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 2005-10-06. DOI: 10.2312/EGVE/IPT_EGVE2005/113-120
Abstract: In the visualization of 3D medical data, the appropriateness of the achieved result depends strongly on the application. Intuitive user interaction is therefore of the utmost importance in order to determine the particular aim of the visualization. In this paper, we present a novel approach for the visualization of 3D medical data that combines volume rendering with AR-based user interaction. The use of augmented reality (AR), assisted by a set of simple tools, allows direct 3D manipulation of the rendered data. The proposed method takes into account regions of interest defined by the user and employs this information to automatically generate an adequate transfer function. Machine learning techniques are used to automatically create the transfer functions applied during the classification stage of the rendering pipeline. The validity of the proposed approach for medical applications is illustrated.
{"title":"Evaluation of collaborative construction in mixed reality","authors":"Breght R. Boschker, J. D. Mulder","doi":"10.2312/EGVE/IPT_EGVE2005/171-179","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/171-179","url":null,"abstract":"Collaborative virtual and augmented reality are an active area of research and many systems supporting collaboration have been presented. Just like there are many different systems for VR and AR, there are many different types of collaboration. In some cases, virtual reality is used to enhance an existing collaborative process. In other cases, it enables new types of collaboration that previously were not possible (e.g. distributed VR). Other systems support tasks that can be performed either individually as well as collaboratively. While these tasks may allow to be performed collaboratively, little has been said on what the benefit is in doing so. We present a user study of a collaborative construction task in a shared physical workspace virtual reality environment under various degrees of interaction in collaboration. Our results show that, for this type of task, a pair of subjects concurrently interacting can be significantly more effective, even though individual user performance decreases. Our results further show that there is no significant benefit in giving only verbal and non-verbal assistance over a single user performing the task.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binaural acoustics for CAVE-like environments without headphones","authors":"I. Assenmacher, T. Kuhlen, T. Lentz","doi":"10.2312/EGVE/IPT_EGVE2005/031-040","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/031-040","url":null,"abstract":"The human auditory system, in contrast to the human visual system, can perceive input from all directions and has no limited field of view. As such, it provides valuable cues for navigation and orientation in virtual environments. However, audio stimuli are not that common in today's Virtual Reality applications, and this might result from the lack of middleware or user acceptance due to the need for specialized or costly hardware. Surprisingly, the lack of headphone-less near body acoustics is widely accepted, and simple intensity panning approaches that enable plausible spatial audio are used. This paper describes a networked environment for sophisticated binaural synthesis-based audio rendering in visual VR applications for a freely moving listener in a CAVE-like environment without the use of headphones. It describes the binaural acoustics rendering technique and a dynamic crosstalk cancellation system for four loudspeakers. In addition to that, synchronization issues and network coupling together with performance measurements that proof the applicability of the system in interactive Virtual Environments are discussed.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125213706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A game engine-based multi-projection virtual environment with system-level synchronization","authors":"N. Hashimoto, Y. Ishida, Makoto Sato","doi":"10.2312/EGVE/IPT_EGVE2005/069-078","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/069-078","url":null,"abstract":"In multi-projector displays, which surround users with high-resolution images, a PC-Cluster is often used for realistic and real-time image generation. However, developing applications that support parallel processing on the PC-Cluster is quite troublesome. It is also difficult to acquire sufficient rendering performance because of the limited bandwidth of the PC-Cluster. Therefore, we aim to achieve affordable and accessible software environments for the multi-projector displays. In this paper, we describe a self-distributing software environment for inheriting existent game engines which provide basic functions of realizing virtual environments. This environment achieves minimum data communication based on a master-slave model. The communication mechanism is automatically applied to target applications by intercepting APIs. Hence we can directly exploit high-capability of the existing game engines on the multi-projector displays.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130083501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interacting with molecular structures: user performance versus system complexity","authors":"R. V. Liere, J. Martens, A. Kok, M. V. Tienen","doi":"10.2312/EGVE/IPT_EGVE2005/147-156","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/147-156","url":null,"abstract":"Effective interaction in a virtual environment requires that the user can adequately judge the spatial relationships between the objects in a 3D scene. In order to accomplish adequate depth perception, existing virtual environments create useful perceptual cues through stereoscopy, motion parallax and (active or passive) haptic feedback. Specific hardware, such as high-end monitors with stereoscopic glasses, head-mounted tracking and mirrors are required to accomplish this. Many potential VR users however refuse to wear cumbersome devices and to adjust to an imposed work environment, especially for longer periods of time. It is therefore important to quantify the repercussions of dropping one or more of the above technologies. These repercussions are likely to depend on the application area, so that comparisons should be performed on tasks that are important and/or occur frequently in the application field of interest.\u0000 In this paper, we report on a formal experiment in which the effects of different hardware components on the speed and accuracy of three-dimensional (3D) interaction tasks are established. The tasks that have been selected for the experiment are inspired by interactions and complexities, as they typically occur when exploring molecular structures. From the experimental data, we develop linear regression models to predict the speed and accuracy of the interaction tasks. Our findings show that hardware supported depth cues have a significant positive effect on task speed and accuracy, while software supported depth cues, such as shadows and perspective cues, have a negative effect on trial time. The task trial times are smaller in a simple fish-tank like desktop environment than in a more complex co-location enabled environment, sometimes at the cost of reduced accuracy.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114258250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic bounding volume hierarchies for occlusion culling","authors":"V. Kovalcík, P. Tobola","doi":"10.2312/EGVE/IPT_EGVE2005/091-096","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/091-096","url":null,"abstract":"We present an algorithm for rendering complex scenes using occlusion queries to resolve visibility. To organize objects in the scene, the algorithm uses a ternary tree which is dynamically modified according to the current view and positions of the objects in the scene. Aside from using heuristic techniques to estimate unnecessary queries, the algorithm uses several new features to estimate the set of visible objects more precisely while still retaining the conservativeness. The algorithm is suitable for both static and dynamic scenes with huge number of moving objects.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124244104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature tracking in VR for cumulus cloud life-cycle studies
E. J. Griffith, F. Post, M. Koutek, T. Heus, H. Jonker
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, 2005-10-06. DOI: 10.2312/EGVE/IPT_EGVE2005/121-128
Abstract: Feature tracking in large data sets is traditionally an off-line, batch-processing operation, while virtual reality typically focuses on highly interactive tasks and applications. This paper presents an approach that uses a combination of off-line preprocessing and interactive visualization in VR to simplify and speed up the identification of interesting features for further study. We couch the discussion in terms of our collaborative research on using virtual reality for cumulus cloud life-cycle studies, where selecting suitable clouds for study is simple for the skilled observer but difficult to formalize. The preprocessing involves identifying individual clouds within the data set with a 4D connected-components algorithm and then saving isosurface, bounding-box and volume information. This information is then interactively visualized in our VR Cloud Explorer with various tools and information displays to identify the most interesting clouds. In a small pilot study, reasonable performance was measured in both the preprocessing phase and the visualization phase.
{"title":"Optical tracking and calibration of tangible interaction devices","authors":"A. V. Rhijn, J. D. Mulder","doi":"10.2312/EGVE/IPT_EGVE2005/041-050","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/041-050","url":null,"abstract":"In this paper, a novel optical tracking and object calibration system is presented for the recognition and pose estimation of tangible interaction devices for virtual and augmented reality systems. The calibration system allows a user to automatically generate models of the relative positions of point-shaped markers attached to interaction devices, simply by moving them in front of the cameras. There are virtually no constraints on the shape of interaction devices. The tracking method takes the calibrated models as input, and recognizes devices by subgraph matching. Both the calibration and tracking methods can handle partial occlusion. Results show the proposed techniques are efficient, accurate, and robust.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122529301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language learning in virtual environments: 'bobo and apples'","authors":"H. Holmen, F. Nielsen","doi":"10.2312/EGVE/IPT_EGVE2005/141-145","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2005/141-145","url":null,"abstract":"'Bobo and Apples' is one of the prototype games within SAME4KIDS (Speech-based,Animated, Multilingual,Educational games for Kids, http://same4kids.sourceforge.net/), a multi-language and multi-purpose games project for young kids of age 3-5 years. The main goal of SAME4KIDS is to expose young learners to multi-module games in various available platforms. In the prototype of 'Bobo and Apples', the game is designed to teach multiple languages and simple math within a frame of virtual environment, using mainly visual images, animation and sound. In this paper we introduce the main design concept and architecture for the prototype, as well as the envisioned VR conversion of the game, based on the Animarium system.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122810290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}