{"title":"Understanding the design space of referencing in collaborative augmented reality environments","authors":"J. Chastine, Kristine S. Nagel, Ying Zhu, Luca Yearsovich","doi":"10.1145/1268517.1268552","DOIUrl":"https://doi.org/10.1145/1268517.1268552","url":null,"abstract":"For collaborative environments to be successful, it is critical that participants have the ability to generate effective references. Given the heterogeneity of the objects and the myriad of possible scenarios for collaborative augmented reality environments, generating meaningful references within them can be difficult. Participants in co-located physical spaces benefit from non-verbal communication, such as eye gaze, pointing and body movement; however, when geographically separated, this form of communication must be synthesized using computer-mediated techniques. We have conducted an exploratory study using a collaborative building task of constructing both physical and virtual models to better understand inter-referential awareness -- or the ability for one participant to refer to a set of objects, and for that reference to be understood. Our contributions are not necessarily in presenting novel techniques, but in narrowing the design space for referencing in collaborative augmented reality. This study suggests collaborative reference preferences are heavily dependent on the context of the workspace.","PeriodicalId":197912,"journal":{"name":"International Genetic Improvement Workshop","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121034361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Twinned meshes for dynamic triangulation of implicit surfaces","authors":"Antoine Bouthors, Matthieu Nesme","doi":"10.1145/1268517.1268521","DOIUrl":"https://doi.org/10.1145/1268517.1268521","url":null,"abstract":"We introduce a new approach to mesh an animated implicit surface for rendering. Our contribution is a method which solves stability issues of implicit triangulation, in the scope of real-time rendering. This method is robust, moreover it provides interactive and quality rendering of animated or manipulated implicit surfaces.\u0000 This approach is based on a double triangulation of the surface, a mechanical one and a geometric one. In the first triangulation, the vertices are the nodes of a simplified mechanical finite element model. The aim of this model is to uniformly and dynamically sample the surface. It is robust, efficient and prevents the inversion of triangles. The second triangulation is dynamically created from the first one at each frame. It is used for rendering and provides details in regions of high curvature. We demonstrate this technique with skeleton-based and volumetric animated surfaces.","PeriodicalId":197912,"journal":{"name":"International Genetic Improvement Workshop","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123753243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adapting wavelet compression to human motion capture clips","authors":"Philippe Beaudoin, Pierre Poulin, M. V. D. Panne","doi":"10.1145/1268517.1268568","DOIUrl":"https://doi.org/10.1145/1268517.1268568","url":null,"abstract":"Motion capture data is an effective way of synthesizing human motion for many interactive applications, including games and simulations. A compact, easy-to-decode representation is needed for the motion data in order to support the real-time motion of a large number of characters with minimal memory and minimal computational overheads. We present a wavelet-based compression technique that is specially adapted to the nature of joint angle data. In particular, we define wavelet coefficient selection as a discrete optimization problem within a tractable search space adapted to the nature of the data. We further extend this technique to take into account visual artifacts such as footskate. The proposed techniques are compared to standard truncated wavelet compression and principal component analysis based compression. The fast decompression times and our focus on short, recomposable animation clips make the proposed techniques a realistic choice for many interactive applications.","PeriodicalId":197912,"journal":{"name":"International Genetic Improvement Workshop","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128491675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}