Latest publications: International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments

Hybrid interfaces in VEs: intent and interaction
G. D. Haan, E. J. Griffith, M. Koutek, F. Post
DOI: 10.2312/EGVE/EGVE06/109-118 · Published 2006-05-08
Abstract: Hybrid user interfaces (UIs) integrate well-known 2D user interface elements into the 3D virtual environment, providing a familiar and portable interface across a variety of VR systems. However, their usability often suffers from reduced accuracy and speed, caused by tracking inaccuracies and a lack of constraints and feedback. To ease these difficulties, large widgets and bulky interface elements are often required, which at the same time limit the size of the 3D workspace and restrict the space where other supplemental 2D information can be displayed. In this paper, we present two developments addressing this problem: supportive user interaction and a new implementation of a hybrid interface. First, we describe a small set of tightly integrated 2D windows we developed with the goal of providing increased flexibility in the UI and reducing UI clutter. Next, we present extensions to our supportive selection technique, IntenSelect. To better cope with a variety of VR and UI tasks, we extended the selection assistance technique to include direct selection, spring-based manipulation, and specialized snapping behavior. Finally, we relate how the effective integration of these two developments eases some of the UI restrictions and produces a more comfortable VR experience.
Citations: 18
Fast continuous collision detection among deformable models using graphics processors
N. Govindaraju, I. Kabul, M. Lin, Dinesh Manocha
DOI: 10.2312/EGVE/EGVE06/019-026 · Published 2006-05-08
Abstract: We present an interactive algorithm to perform continuous collision detection between general deformable models using graphics processors (GPUs). We model the motion of each object in the environment as a continuous path and check for collisions along the paths. Our algorithm precomputes the chromatic decomposition for each object and uses visibility queries on GPUs to quickly compute potentially colliding sets of primitives. We introduce a primitive classification technique to perform efficient continuous self-collision. We have implemented our algorithm on a 3.0 GHz Pentium IV PC with an NVIDIA 7800 GPU, and we highlight its performance on complex simulations composed of several thousand triangles. In practice, our algorithm is able to detect all contacts, including self-collisions, at image-space precision in tens of milliseconds.
Citations: 42
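The entry above models each object's motion as a continuous path and checks for contacts along it. As a minimal illustration of that idea only (not the paper's GPU-based chromatic-decomposition algorithm), the sketch below solves the closed-form continuous collision test for two linearly moving spheres:

```python
import math

def first_contact_time(p0, v0, r0, p1, v1, r1):
    """Earliest t in [0, 1] at which two spheres moving linearly
    (centre = p + t*v) come into contact, or None if they never do."""
    dp = [x - y for x, y in zip(p0, p1)]   # relative position at t = 0
    dv = [x - y for x, y in zip(v0, v1)]   # relative velocity
    r = r0 + r1
    # Contact when |dp + t*dv|^2 == r^2: a quadratic a*t^2 + b*t + c = 0.
    a = sum(x * x for x in dv)
    b = 2.0 * sum(x * y for x, y in zip(dp, dv))
    c = sum(x * x for x in dp) - r * r
    if c <= 0.0:
        return 0.0                          # already in contact at t = 0
    if a == 0.0:
        return None                         # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                         # closest approach exceeds r
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # smaller root = first contact
    return t if 0.0 <= t <= 1.0 else None
```

For example, a unit sphere at the origin and one at (3, 0, 0) moving with velocity (-2, 0, 0) first touch at t = 0.5, when their centres are 2 radii apart.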
A new view management method for wearable augmented reality systems: emphasizing the user-viewed object and the corresponding annotation
Ryuhei Tenmoku, M. Kanbara, N. Yokoya
DOI: 10.2312/EGVE/EGVE06/127-134 · Published 2006-05-08
Abstract: This paper describes a new view management method for annotation overlay using augmented reality (AR) systems. The proposed method emphasizes the user-viewed object and the corresponding annotation in order to present the links between annotations and real objects clearly. The method includes two techniques for emphasizing the user-viewed object and its annotation. First, it highlights the object the user is gazing at using a 3D model without textures. Second, when the user-viewed object is occluded by other objects, the object is complemented using an image made from a detailed 3D model with textures. The paper also describes experiments showing the feasibility of the proposed method using a prototype wearable AR system.
Citations: 3
GA-based adaptive sampling for image-based walkthrough
Dong Hoon Lee, Jong Ryul Kim, Soon Ki Jung
DOI: 10.2312/EGVE/EGVE06/135-142 · Published 2006-05-08
Abstract: This paper presents an adaptive sampling method for image-based walkthrough. Our goal is to select minimal sets from the initially densely sampled data set while guaranteeing a visually correct view from any position and in any direction in the walkthrough space. For this purpose we formulate a covered-region sampling criterion and then treat the sampling problem as a set covering problem. We estimate the optimal set using a genetic algorithm and show the efficiency of the proposed method in several experiments.
Citations: 0
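The paper above casts view sampling as a set covering problem and searches it with a genetic algorithm. As a baseline sketch of the same formulation, the snippet below uses the classic greedy set-cover approximation instead of a GA; the data layout (a dict mapping candidate sample IDs to the regions they cover) is an illustrative assumption:

```python
def greedy_set_cover(universe, subsets):
    """Pick a small family of candidate samples whose covered regions
    together cover `universe`. Greedy ln(n)-approximation; the paper
    instead searches the same covering problem with a GA."""
    remaining = set(universe)
    chosen = []
    while remaining:
        # Pick the candidate covering the most still-uncovered region.
        best = max(subsets, key=lambda k: len(subsets[k] & remaining))
        if not subsets[best] & remaining:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        remaining -= subsets[best]
    return chosen

# Hypothetical example: regions 0..4 covered by four candidate samples.
samples = {"a": {0, 1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {0}}
picked = greedy_set_cover(range(5), samples)
```

Greedy picks "a" first (covers three regions), then "c" (covers the remaining two), so two samples suffice here.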
GraphTracker: a topology projection invariant optical tracker
F. Smit, A. V. Rhijn, R. V. Liere
DOI: 10.2312/EGVE/EGVE06/063-070 · Published 2006-05-08
Abstract: In this paper, we describe a new optical tracking algorithm for pose estimation of interaction devices in virtual and augmented reality. Given a 3D model of the interaction device and a number of camera images, the primary difficulty in pose reconstruction is finding the correspondence between 2D image points and 3D model points. Most previous methods solve this problem by stereo correspondence. Once the correspondence problem has been solved, the pose can be estimated by determining the transformation between the 3D point cloud and the model.
Our approach is based on the projection-invariant topology of graph structures. The topology of a graph structure does not change under projection; in this way we solve the point correspondence problem with a subgraph matching algorithm between the detected 2D image graph and the model graph.
There are four advantages to our method. First, the correspondence problem is solved entirely in 2D, and therefore no stereo correspondence is needed; consequently, we can use any number of cameras, including a single camera. Second, as opposed to stereo methods, we do not need to detect the same model point in two different cameras, and therefore our method is much more robust against occlusion. Third, the subgraph matching algorithm can still detect a match even when parts of the graph are occluded, for example by the user's hands, which provides further robustness against occlusion. Finally, the error in the pose estimation is significantly reduced as the number of cameras is increased.
Citations: 10
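GraphTracker's key observation is that graph topology survives projection, so 2D-to-3D correspondence reduces to subgraph matching. A minimal brute-force sketch of such a matcher follows (illustrative only, and exponential in graph size; the paper's matcher is far more capable):

```python
from itertools import permutations

def find_subgraph_match(model_adj, image_adj):
    """Search for a mapping of model vertices onto image-graph vertices
    such that every model edge maps onto an image edge. Graphs are given
    as adjacency dicts {vertex: set_of_neighbours}. Returns one mapping
    dict, or None. Only viable for very small graphs."""
    m_nodes = sorted(model_adj)
    for candidate in permutations(sorted(image_adj), len(m_nodes)):
        mapping = dict(zip(m_nodes, candidate))
        # Accept iff adjacency is preserved for every model edge.
        if all(mapping[u] in image_adj[mapping[v]]
               for v in m_nodes for u in model_adj[v]):
            return mapping
    return None

# Toy example: find a triangle (the "model") inside a larger image graph.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
image = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
match = find_subgraph_match(triangle, image)
```

Here the triangle maps onto vertices a, b, c regardless of how the image graph was projected, which is the invariance the tracker exploits.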
A survey and taxonomy of 3D menu techniques
Raimund Dachselt, Anett Hübner
DOI: 10.2312/EGVE/EGVE06/089-099 · Published 2006-05-08
Abstract: A huge variety of interaction techniques has been developed in the field of virtual and augmented reality. Whereas techniques for object selection, manipulation, travel, and wayfinding are covered in existing taxonomies in considerable detail, application control techniques have not yet been sufficiently considered. However, they are needed by almost every mixed reality application, e.g. for choosing among alternative objects or options. For this purpose a great variety of distinct three-dimensional menu selection techniques is available. This paper surveys existing 3D menus from the corpus of literature and classifies them according to various criteria. The taxonomy introduced here assists developers of interactive 3D applications in better evaluating their options when choosing and implementing a 3D menu technique. Since the taxonomy spans the design space for 3D menu solutions, it also aids researchers in identifying opportunities to improve or create novel virtual menu techniques.
Citations: 20
Friction surfaces: scaled ray-casting manipulation for interacting with 2D GUIs
C. Andújar, F. Argelaguet
DOI: 10.2312/EGVE/EGVE06/101-108 · Published 2006-05-08
Abstract: The accommodation of conventional 2D GUIs within virtual environments (VEs) can greatly enhance the possibilities of many VE applications. In this paper we present a variation of the well-known ray-casting technique for fast and accurate selection of 2D widgets on a virtual window immersed in a 3D world. The main idea is to provide a new interaction mode where hand rotations are scaled down so that the ray is constrained to intersect the active virtual window. This is accomplished by changing the control-display ratio between the orientation of the user's hand and the ray used for selection. Our technique uses a curved representation of the ray, providing visual feedback on the orientation of both the input device and the selection ray. The users' feeling is that they control a flexible ray that gets curved as it moves over a virtual friction surface defined by the 2D window. We have implemented this technique and evaluated its effectiveness in terms of accuracy and performance. Our experiments on a four-sided CAVE indicate that the proposed technique can increase the speed and accuracy of component selection in 2D GUIs immersed in 3D worlds.
Citations: 12
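The friction-surfaces idea rests on scaling down hand rotations via the control-display ratio so that small hand tremor barely moves the selection ray over the active window. A minimal sketch of that scaling, with hypothetical parameter names and a simple yaw/pitch parameterisation (the paper works with full 3D ray orientations):

```python
def scaled_ray_direction(hand_yaw, hand_pitch,
                         anchor_yaw, anchor_pitch, ratio=0.25):
    """Scale the hand's angular offset from an anchor direction (e.g.
    the direction toward the centre of the active 2D window) by a
    control-display ratio < 1. Angles in radians; returns the yaw and
    pitch of the displayed selection ray."""
    ray_yaw = anchor_yaw + ratio * (hand_yaw - anchor_yaw)
    ray_pitch = anchor_pitch + ratio * (hand_pitch - anchor_pitch)
    return ray_yaw, ray_pitch

# A 0.4 rad hand rotation moves the ray only 0.1 rad at ratio 0.25,
# so widget-sized targets become effectively four times larger.
yaw, pitch = scaled_ray_direction(0.4, -0.2, 0.0, 0.0, ratio=0.25)
```

In a full implementation the anchor would be re-established whenever the ray enters a window, and the curved ray rendered between the true hand direction and the scaled one.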
Interactive data annotation in virtual environments
I. Assenmacher, B. Hentschel, C. Ni, T. Kuhlen, C. Bischof
DOI: 10.2312/EGVE/EGVE06/119-126 · Published 2006-05-08
Abstract: Note-taking is an integral part of scientific data analysis. In particular, it is vital for explorative analysis, as the expression and transformation of ideas is a necessary precondition for gaining insight. However, during interactive data exploration in virtual environments it is not possible to keep pen and paper at hand. Additionally, data analysis in virtual environments allows the multi-modal exploration of complex and time-varying data. We propose IDEA, a toolkit-independent content generation system that features a defined process model, a generic annotation model with a variety of content types, and specially developed interaction metaphors for their input and output handling. This allows the user to note ideas, e.g. in the form of text, images, or voice, without interfering with the analysis process. In this paper we present the basic concepts of this system. We describe the context-content model, which ties annotation content to logical objects that are part of the scene and stores information specific to interaction in virtual environments. The IDEA system has already been applied in a prototypical implementation for the exploration of air flow in the human nasal cavity, where it is used for data analysis as well as interdisciplinary communication.
Citations: 15
Colosseum3D: authoring framework for virtual environments
A. Backman
DOI: 10.2312/EGVE/IPT_EGVE2005/225-226 · Published 2005-10-06
Abstract: This paper describes Colosseum3D, an authoring environment for real-time 3D environments. The framework makes it possible to easily create rich virtual environments with rigid-body dynamics, advanced rendering using OpenGL shaders, 3D sound, and human avatars. The creative process of building complex simulators is supported by several authoring paths: a low-level C++ API, an expressive high-level file format, and a scripting layer.
Citations: 35
IntenSelect: using dynamic object rating for assisting 3D object selection
G. D. Haan, M. Koutek, F. Post
DOI: 10.2312/EGVE/IPT_EGVE2005/201-209 · Published 2005-10-06
Abstract: We present IntenSelect, a novel selection technique that dynamically assists the user in the selection of 3D objects in virtual environments. Ray-casting selection is commonly used, although it has limited accuracy and can be problematic in more difficult situations where the intended object is occluded or moving. Selection-by-volume techniques, which extend normal ray-casting, provide error tolerance to cope with the limited accuracy. However, these extensions are generally not usable in the more complex selection situations. We have devised a new selection-by-volume technique that remains usable in these situations. To achieve this, we use a new scoring function to calculate the score of objects that fall within a user-controlled conic selection volume. By accumulating these scores, we obtain a dynamic, time-dependent object ranking. The highest-ranking object, the active object, is indicated by bending the otherwise straight selection ray towards it. As the selection ray is effectively snapped to the object, the user can select the object more easily. Our user tests indicate that IntenSelect can improve selection performance over ray-casting, especially in the more difficult cases of small objects. Furthermore, the introduced time-dependent object ranking proves especially useful when objects are moving, occluded, and/or cluttered. Our simple scoring scheme can easily be extended for special-purpose interaction, such as widget- or application-specific interaction functionality, which creates new possibilities for complex interaction behavior.
Citations: 112
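IntenSelect accumulates per-object scores inside a conic selection volume to obtain a time-dependent ranking. The sketch below illustrates that accumulate-and-decay idea; the decay factor, gain, and the exact per-frame scoring function are assumptions for illustration, not the published formula:

```python
def update_score(prev_score, angle_to_object, cone_angle,
                 decay=0.9, gain=1.0):
    """One frame of IntenSelect-style score accumulation (a sketch):
    objects inside the conic selection volume gain score in proportion
    to how close they lie to the cone axis, and all scores decay over
    time so the ranking stays dynamic. Angles in radians."""
    contribution = 0.0
    if angle_to_object < cone_angle:
        contribution = gain * (1.0 - angle_to_object / cone_angle)
    return decay * prev_score + contribution

def pick_active(scores):
    """The highest-ranking object is the one the bent ray snaps to."""
    return max(scores, key=scores.get) if scores else None

# Over several frames, an object near the cone axis out-scores one near
# the cone's edge, even if both are inside the volume every frame.
s_near = s_edge = 0.0
for _ in range(10):
    s_near = update_score(s_near, 0.05, 0.3)
    s_edge = update_score(s_edge, 0.25, 0.3)
```

Because scores persist across frames, a briefly occluded or moving object keeps its ranking for a while, which is what makes the snapping stable.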