International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments: Latest Publications

Widget manipulation revisited: a case study in modeling interactions between experimental conditions
J. Martens, A. Kok, R. V. Liere
{"title":"Widget manipulation revisited: a case study in modeling interactions between experimental conditions","authors":"J. Martens, A. Kok, R. V. Liere","doi":"10.2312/EGVE/IPT_EGVE2007/053-060","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2007/053-060","url":null,"abstract":"Widgets are often used to perform control tasks in three-dimensional (3D) virtual environments (VEs). Spatial interactions through widgets require precise 3D manipulations, and several design aspects of VEs contribute to the ease, accuracy, and speed with which users can perform these interactions. Throughout the years, VE researchers have studied relevant design aspects; for example, the location and size of the widgets, monoscopic versus stereoscopic viewing, the presence or absence of co-location, or the inclusion of (passive) tactile feedback, are all design aspects that have been studied. However, researchers have mostly studied design aspects in isolation and have paid little attention to possible interactions between conditions.\u0000 In this paper, we introduce a method for modeling interaction effects between experimental conditions and illus- trate it using data from a specific case study, i.e., widget manipulation tasks. More specifically, we model how the effect of passive tactile feedback interacts with stereoscopic viewing for three widget manipulation tasks. We also model how these effects vary between two tasks, i.e., button and menu item selection. Models that include inter- action effects between experimental conditions can be used to get a deeper understanding in the system design trade-offs of a virtual environment.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2007-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114639561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
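The modeling idea summarized above lends itself to a brief illustration. The following sketch is hypothetical and is not the authors' analysis: it fits an ordinary least-squares model with an interaction term between two binary factors (passive tactile feedback and stereoscopic viewing) on synthetic task completion times; all column names, coefficients, and data are invented.

```python
# Hypothetical illustration of modeling an interaction effect between two
# experimental conditions; synthetic data, not the paper's dataset or method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
tactile = rng.integers(0, 2, n)   # 0 = no feedback, 1 = passive tactile feedback
stereo = rng.integers(0, 2, n)    # 0 = monoscopic, 1 = stereoscopic viewing
# Synthetic completion times with two main effects plus an interaction term
# (tactile feedback helps more under stereoscopic viewing).
time = (2.0 - 0.2 * tactile - 0.3 * stereo
        - 0.15 * tactile * stereo + rng.normal(0, 0.1, n))
df = pd.DataFrame({"time": time, "tactile": tactile, "stereo": stereo})

# "C(tactile) * C(stereo)" expands to both main effects and their interaction.
fit = smf.ols("time ~ C(tactile) * C(stereo)", data=df).fit()
print(fit.summary())
```

If the interaction coefficient is significant, the effect of tactile feedback cannot be reported independently of the viewing condition, which is the kind of design trade-off the abstract argues such models expose.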
A comparison of tracking- and controller-based input for complex bimanual interaction in virtual environments
André Kunert, Alexander Kulik, A. Huckauf, B. Fröhlich
{"title":"A comparison of tracking- and controller-based input for complex bimanual interaction in virtual environments","authors":"André Kunert, Alexander Kulik, A. Huckauf, B. Fröhlich","doi":"10.2312/EGVE/IPT_EGVE2007/043-052","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2007/043-052","url":null,"abstract":"We describe a user study comparing a two-handed controller-based input device to a two-handed tracking solution, both offering the control space of six degrees of freedom to each hand. For benchmarking the different input modalities we implemented a set of evaluation tasks requiring viewpoint navigation, selection and object manipulation in a maze-like virtual environment. The results of the study reveal similar overall performance for both input modalities for compound tasks. However significant differences with respect to the involved subtasks were found. Furthermore we can show that the integral attributes of a subtask do not necessarily need to be manipulated by a single hand. Instead, the simultaneously required degrees of freedom for operating integrally perceived subtasks may also be distributed to both hands for better control.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2007-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134531920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 15
Three extensions to subtractive crosstalk reduction
F. Smit, R. V. Liere, B. Fröhlich
{"title":"Three extensions to subtractive crosstalk reduction","authors":"F. Smit, R. V. Liere, B. Fröhlich","doi":"10.2312/EGVE/IPT_EGVE2007/085-092","DOIUrl":"https://doi.org/10.2312/EGVE/IPT_EGVE2007/085-092","url":null,"abstract":"Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly fuse stereoscopic images. In this paper, three extensions for improved software crosstalk reduction are introduced. First, we propose a reduction method operating in CIELAB color space to find a perceptually better color match for crosstalk corrected pixels. Second, we introduce a geometry-based reduction method that operates on fused 3D pixels. Finally, a run-time optimization is introduced that avoids the need to process each pixel. We evaluated our CIELAB-based method using the Visible Differences Predictor (VDP). Our results show that we are able to significantly improve crosstalk reduction compared to previously used methods that operate in RGB color space. The combination of our methods provides an improved, real-time software crosstalk reduction framework, applicable to a wider range of scenes, delivering better quality, higher performance, and more flexibility.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2007-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124066762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 16
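For context, the sketch below shows the plain subtractive crosstalk reduction in RGB space that this paper extends: a fixed fraction of the unintended view is subtracted from each eye's image. The leakage coefficient and the assumption of linear RGB values in [0, 1] are illustrative; the paper's CIELAB-based, geometry-based, and run-time extensions are not reproduced here.

```python
# Basic subtractive crosstalk reduction in RGB (the baseline this paper
# improves on); the leakage coefficient is an assumed constant.
import numpy as np

def reduce_crosstalk(left, right, leak=0.08):
    """left, right: linear-RGB float arrays in [0, 1]; leak: estimated fraction
    of the opposite view that bleeds through the stereo channel separation."""
    left_out = np.clip(left - leak * right, 0.0, 1.0)
    right_out = np.clip(right - leak * left, 0.0, 1.0)
    return left_out, right_out

# Random images stand in for a stereo pair.
rng = np.random.default_rng(1)
L, R = rng.random((480, 640, 3)), rng.random((480, 640, 3))
L_corr, R_corr = reduce_crosstalk(L, R)
```

The clamp at zero is the classic limitation of this baseline: pixel values cannot be driven below black, so the correction is imperfect for dark pixels.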
Wearable mixed reality system in less than 1 pound
A. Peternier, F. Vexo, D. Thalmann
{"title":"Wearable mixed reality system In less than 1 pound","authors":"A. Peternier, F. Vexo, D. Thalmann","doi":"10.2312/EGVE/EGVE06/035-044","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/035-044","url":null,"abstract":"We have designed a wearable Mixed Reality (MR) framework which allows to real-time render game-like 3D scenes on see-through head-mounted displays (see through HMDs) and to localize the user position within a known internet wireless area. Our equipment weights less than 1 Pound (0.45 Kilos). The information visualized on the mobile device could be sent on-demand from a remote server and realtime rendered onboard.We present our PDA-based platform as a valid alternative to use in wearable MR contexts under less mobility and encumbering constraints: our approach eliminates the typical backpack with a laptop, a GPS antenna and a heavy HMD usually required in this cases. A discussion about our results and user experiences with our approach using a handheld for 3D rendering is presented as well.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129385750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
Managing missed interactions in distributed virtual environments
S. Parkin, Péter András, G. Morgan
{"title":"Managing missed interactions in distributed virtual environments","authors":"S. Parkin, Péter András, G. Morgan","doi":"10.2312/EGVE/EGVE06/027-034","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/027-034","url":null,"abstract":"A scalable distributed virtual environment (DVE) may be achieved by ensuring virtual world objects communicate their actions to only those objects that fall within their influence, reducing the need to send and process unnecessary messages. A missed interaction may be defined as a failure to exchange messages to appropriately model object interaction. A number of parameters under the control of a DVE developer may influence the possibility of missed interactions occurring (e.g., object velocities, area of influence). However, due to the complexities associated with object movement and the deployment environment (e.g., non-deterministic object movement, network latency), identifying the value for such parameters to minimise missed interactions while maintaining scalability (minimal message passing) is not clear. We present in this paper a tool which simulates a DVE and provides developers with an indication of the appropriate values for parameters when balancing missed interactions against scalability.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132918694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
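The interplay of object velocity, area of influence, and update interval described above can be shown with a one-dimensional toy example. The sketch below is not the paper's simulation tool; all parameter values are invented purely to illustrate how a fast object can cross another object's area of influence entirely between two updates.

```python
# Toy illustration of a "missed interaction": a moving object crosses a
# stationary object's area of influence entirely between two update ticks.
def interaction_missed(velocity, radius, tick, start=-9.7, duration=10.0):
    """A stationary object sits at x = 0 with the given influence radius; a
    second object starts at `start` and moves right at `velocity`, with its
    position sampled every `tick` seconds. Returns True if the moving object
    is never observed inside the influence radius."""
    t, x = 0.0, start
    while t <= duration:
        if abs(x) <= radius:
            return False          # interaction detected at this update
        t += tick
        x += velocity * tick
    return True                   # the region was crossed between updates

print(interaction_missed(velocity=4.0, radius=1.0, tick=1.0))   # True: missed
print(interaction_missed(velocity=4.0, radius=1.0, tick=0.1))   # False: caught
```

Shortening the update interval, enlarging the influence radius, or capping velocities all prevent the miss, but each choice costs messages or constrains the simulation, which is the balance the paper's tool is meant to explore.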
Measuring the discernability of virtual objects in conventional and stylized augmented reality
J. Fischer, D. Cunningham, D. Bartz, C. Wallraven, H. Bülthoff, W. Straßer
{"title":"Measuring the discernability of virtual objects in conventional and stylized augmented reality","authors":"J. Fischer, D. Cunningham, D. Bartz, C. Wallraven, H. Bülthoff, W. Straßer","doi":"10.2312/EGVE/EGVE06/053-061","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/053-061","url":null,"abstract":"In augmented reality, virtual graphical objects are overlaid over the real environment of the observer. Conventional augmented reality systems normally use standard real-time rendering methods for generating the graphical representations of virtual objects. These renderings contain the typical artifacts of computer generated graphics, e.g., aliasing caused by the rasterization process and unrealistic, manually configured illumination models. Due to these artifacts, virtual objects look artifical and can easily be distinguished from the real environment.\u0000 A different approach to generating augmented reality images is the basis of stylized augmented reality [FBS05c]. Here, similar types of artistic or illustrative stylization are applied to the virtual objects and the camera image of the real enviroment. Therefore, real and virtual image elements look significantly more similar and are less distinguishable from each other.\u0000 In this paper, we present the results of a psychophysical study on the effectiveness of stylized augmented reality. In this study, a number of participants were asked to decide whether objects shown in images of augmented reality scenes are virtual or real. Conventionally rendered as well as stylized augmented reality images and short video clips were presented to the participants. The correctness of the participants' responses and their reaction times were recorded. The results of our study show that an equalized level of realism is achieved by using stylized augmented reality, i.e., that it is significantly more difficult to distinguish virtual objects from real objects.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123107080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 27
A multi modal table-top 3D modeling tool in augmented environments
Thomas Novotny, I. Lindt, Wolfgang Broll
{"title":"A multi modal table-top 3D modeling tool in augmented environments","authors":"Thomas Novotny, I. Lindt, Wolfgang Broll","doi":"10.2312/EGVE/EGVE06/045-052","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/045-052","url":null,"abstract":"Even with today's highly sophisticated 3D modeling programs, creating, assembling and adapting 3D models is still a big challenge for inexperienced users. In this paper we present our approach of an intuitive table-top 3D modeling tool in Augmented Reality. It allows the author to view 3D virtual objects within his natural working environment, to manipulate them and to create new 3D elements easily. The offered interaction techniques support the author's activity by a combination of tangible user interfaces with voice recognition, a gaze-based view pointer and 3D widgets as components of a multi modal user interface. Within the scope of this work, intuitive interaction techniques were realized to offer the participants an easy way of working within an augmented environment. User tests were performed to compare our approach to a WIMP-based desktop application and to an alternative AR modeling application.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124444796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Model-based hybrid tracking for medical augmented reality
J. Fischer, Michael Eichler, D. Bartz, W. Straßer
{"title":"Model-based hybrid tracking for medical augmented reality","authors":"J. Fischer, Michael Eichler, D. Bartz, W. Straßer","doi":"10.2312/EGVE/EGVE06/071-080","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/071-080","url":null,"abstract":"Camera pose estimation is one of the most important, but also one of the most challenging tasks in augmented reality. Without a highly accurate estimation of the position and orientation of the digital video camera, it is impossible to render a spatially correct overlay of graphical information. This requirement is even more crucial in medical applications, where the virtual objects are supposed to be correctly aligned with the patient. Many medical AR systems use specialized tracking devices, which can be of limited suitability for real-world scenarios. We have developed an AR framework for surgical applications based on existing medical equipment. A surgical navigation device delivers tracking information measured by a built-in infrared camera system, which is the basis for the pose estimation of the AR video camera. However, depending on the conditions in the environment, this infrared pose data can contain discernible tracking errors. One main drawback of the medical tracking device is the fact that, while it delivers a very high positional accuracy, the reported camera orientation can contain a relatively large error.\u0000 In this paper, we present a hybrid tracking scheme for medical augmented reality based on a certified medical tracking system. The final pose estimation takes the inital infrared tracking data as well as salient features in the camera image into account. The vision-based component of the tracking algorithm relies on a pre-defined graphical model of the observed scene. The infrared and vision-based tracking data are tightly integrated into a unified pose estimation algorithm. This algorithm is based on an iterative numerical optimization method. We describe an implementation of the algorithm and present experimental data showing that our new method is capable of delivering a more accurate pose estimation.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133560010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
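To make the idea of a unified pose estimation concrete, here is a hedged sketch of one way to combine an infrared-tracked pose with image measurements of known model points: the infrared position is kept as a strong prior while the full pose is refined against reprojection error. The camera model, weights, synthetic data, and use of SciPy's least-squares solver are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical hybrid pose refinement: start from the infrared tracker's pose
# (accurate position, less accurate orientation) and refine it against image
# observations of known 3D model points. All values are synthetic.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 800.0  # assumed focal length in pixels, principal point at the origin

def project(points_3d, rotvec, t):
    """Pinhole projection of world points into the camera with pose (R, t)."""
    cam = Rotation.from_rotvec(rotvec).apply(points_3d) + t
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, model_pts, image_pts, t_ir, w_pos=10.0):
    rotvec, t = params[:3], params[3:]
    reproj = (project(model_pts, rotvec, t) - image_pts).ravel()
    # Trust the infrared tracker's position strongly, its orientation weakly.
    prior = w_pos * (t - t_ir)
    return np.concatenate([reproj, prior])

# Synthetic ground-truth pose and a noisy infrared initialization.
rng = np.random.default_rng(0)
model_pts = rng.uniform(-0.2, 0.2, (12, 3)) + np.array([0.0, 0.0, 1.5])
rot_true, t_true = np.array([0.1, -0.05, 0.02]), np.array([0.02, 0.01, 0.0])
image_pts = project(model_pts, rot_true, t_true)
rot_ir = rot_true + np.array([0.05, -0.04, 0.03])   # noticeable orientation error
t_ir = t_true + rng.normal(0, 0.001, 3)             # position nearly exact

x0 = np.concatenate([rot_ir, t_ir])
result = least_squares(residuals, x0, args=(model_pts, image_pts, t_ir))
print("refined rotation:", result.x[:3], "refined translation:", result.x[3:])
```

Weighting the positional prior heavily mirrors the observation in the abstract that the medical tracker's position is far more reliable than its reported orientation.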
A model for the expected running time of collision detection using AABB trees
René Weller, Jan Klein, G. Zachmann
{"title":"A model for the expected running time of collision detection using AABBs trees","authors":"René Weller, Jan Klein, G. Zachmann","doi":"10.2312/EGVE/EGVE06/011-017","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/011-017","url":null,"abstract":"In this paper, we propose a model to estimate the expected running time of hierarchical collision detection that utilizes AABB trees, which are a frequently used type of bounding volume (BV).\u0000 We show that the average running time for the simultaneous traversal of two binary AABB trees depends on two characteristic parameters: the overlap of the root BVs and the BV diminishing factor within the hierarchies. With this model, we show that the average running time is in O(n) or even in O(logn) for realistic cases. Finally, we present some experiments that confirm our theoretical considerations.\u0000 We believe that our results are interesting not only from a theoretical point of view, but also for practical applications, e. g., in time-critical collision detection scenarios where our running time prediction could help to make the best use of CPU time available.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125162799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
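The quantity the model estimates is the work done by the simultaneous traversal of two AABB trees. A minimal sketch of that traversal, with a toy node layout and without the exact primitive tests, is given below; it is illustrative, not the authors' code.

```python
# Simultaneous traversal of two AABB trees: the recursion stops as soon as the
# two nodes' boxes no longer overlap, which is what keeps the expected running
# time low when the root overlap is small.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AABBNode:
    lo: List[float]                                      # box minimum corner (x, y, z)
    hi: List[float]                                      # box maximum corner (x, y, z)
    children: List["AABBNode"] = field(default_factory=list)
    triangles: List[int] = field(default_factory=list)   # primitive ids, leaves only

def boxes_overlap(a: AABBNode, b: AABBNode) -> bool:
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i] for i in range(3))

def traverse(a: AABBNode, b: AABBNode, pairs: list) -> None:
    """Collect overlapping leaf pairs; a full collision test would then run
    exact triangle-triangle intersection on the primitives of those pairs."""
    if not boxes_overlap(a, b):
        return                              # prune: disjoint boxes cannot collide
    if not a.children and not b.children:
        pairs.append((a, b))                # two leaves reached
    elif not a.children:
        for cb in b.children:
            traverse(a, cb, pairs)
    elif not b.children:
        for ca in a.children:
            traverse(ca, b, pairs)
    else:                                   # both inner nodes: recurse into child pairs
        for ca in a.children:
            for cb in b.children:
                traverse(ca, cb, pairs)
```

The number of recursive calls made here, as a function of the root-box overlap and of how quickly the child boxes shrink, is roughly the quantity the paper's model predicts.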
Camera setup optimization for optical tracking in virtual environments
Philippe Cerfontaine, M. Schirski, Daniel Bündgens, T. Kuhlen
{"title":"Camera setup optimization for optical tracking in virtual environments","authors":"Philippe Cerfontaine, M. Schirski, Daniel Bündgens, T. Kuhlen","doi":"10.2312/EGVE/EGVE06/081-088","DOIUrl":"https://doi.org/10.2312/EGVE/EGVE06/081-088","url":null,"abstract":"In this paper we present a method for finding the optimal camera alignment for a tracking system with multiple cameras, by specifying the volume that should be tracked and an initial camera setup. The approach we use is twofold: on the one hand, we use a rather simple gradient based steepest descent method and on the other hand, we also implement a simulated annealing algorithm that features guaranteed optimality assertions. Both approaches are fully automatic and take advantage of modern graphics hardware since we implemented a GPU-based accelerated visibility test. The proposed algorithms can automatically optimize the whole camera setup by adjusting the given set of parameters. The optimization may have different goals depending on the desired application, e.g. one may wish to optimize towards the widest possible coverage of the specified volume, while others would prefer to maximize the number of cameras seeing a certain area to overcome heavy occlusion problems during the tracking process. Our approach also considers parameter constraints that the user may specify according to the local environment where the cameras have to be set up. This makes it possible to simply formulate higher level constraints e.g. all cameras have a vertical up vector. It individually adapts the optimization to the given situation and also asserts the feasibility of the algorithm's output.","PeriodicalId":210571,"journal":{"name":"International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2006-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127173151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
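As a rough illustration of the simulated-annealing half of the approach, the toy sketch below reduces the problem to 2D: cameras are (x, y, heading) triples with a fixed field of view, and the objective is the fraction of sample points in a target region seen by at least two cameras. The geometry, cooling schedule, and all parameter values are invented; there is no occlusion handling or GPU visibility test, and the gradient-based method is not shown.

```python
# Toy 2D simulated annealing for camera placement: maximize the fraction of
# target points seen by at least two cameras. All numbers are illustrative.
import math
import random

FOV = math.radians(60)
# A 4 m x 4 m target region sampled on a 9 x 9 grid.
TARGET = [(x * 0.5, y * 0.5) for x in range(9) for y in range(9)]

def sees(cam, p):
    cx, cy, heading = cam
    ang = math.atan2(p[1] - cy, p[0] - cx)
    diff = (ang - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= FOV / 2

def coverage(cams, min_views=2):
    return sum(1 for p in TARGET
               if sum(sees(c, p) for c in cams) >= min_views) / len(TARGET)

def anneal(cams, steps=5000, temp=1.0, cooling=0.999):
    best, best_cov = list(cams), coverage(cams)
    cur, cur_cov = list(cams), best_cov
    for _ in range(steps):
        cand = list(cur)
        i = random.randrange(len(cand))
        cx, cy, h = cand[i]
        cand[i] = (cx + random.gauss(0, 0.2), cy + random.gauss(0, 0.2),
                   h + random.gauss(0, 0.2))
        cov = coverage(cand)
        # Accept improvements always, worse candidates with a temperature-
        # dependent probability (standard Metropolis criterion).
        if cov > cur_cov or random.random() < math.exp((cov - cur_cov) / temp):
            cur, cur_cov = cand, cov
            if cov > best_cov:
                best, best_cov = cand, cov
        temp *= cooling
    return best, best_cov

initial = [(-1.0, -1.0, 0.5), (5.0, -1.0, 2.5), (2.0, 5.0, -1.5)]
print(anneal(initial))
```

Swapping the coverage function for one that rewards seeing each point from more cameras would steer the same loop toward the occlusion-robust setups mentioned in the abstract, which illustrates how the optimization goal can change with the application.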