Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia: latest publications (all papers published 2005-11-29)

On creative engagement with interactive art
E. Edmonds
DOI: 10.1145/1101389.1101450
Abstract: The paper is concerned with the design of interactive art systems intended for display in public locations. It reviews approaches to interactive art systems and discusses the issue of creative engagement with them by the active audience. An approach to elaborating a model of creative engagement is described and exploratory work on its refinement is reported.
Citations: 1
Adaptive T-spline surface fitting to z-map models
Jianmin Zheng, Yimin Wang, S. H. Soon
DOI: 10.1145/1101389.1101468
Abstract: Surface fitting refers to the process of constructing a smooth representation for an object surface from a fairly large number of measured 3D data points. This paper presents an automatic algorithm to construct smooth parametric surfaces using T-splines from z-map data. The algorithm begins with a rough surface approximation and then progressively refines it in the regions where the approximation accuracy does not meet the requirement. The topology of the resulting T-spline surface is determined adaptively based on the local geometric character of the input data and the geometry of the control points is obtained by a least squares procedure. The advantage of the approach is that the resulting surface is C2 continuous and the refinement is essentially local, resulting in a small number of control points for the surface.
Citations: 50
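The fit-and-refine loop in the abstract above is easy to illustrate in one dimension. The following is a minimal sketch, assuming SciPy's least-squares B-spline stands in for the (2D, locally refinable) T-spline machinery; the tolerance, knot-insertion rule, and function names are ours, not the paper's:

```python
# Minimal 1D analogue of adaptive least-squares spline fitting.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def adaptive_fit(x, y, tol, max_iter=20):
    """Cubic least-squares fit; insert a knot where the residual is largest."""
    knots = []                                   # rough fit: no interior knots
    for _ in range(max_iter):
        spl = LSQUnivariateSpline(x, y, knots, k=3)
        err = np.abs(spl(x) - y)
        if err.max() < tol:                      # accuracy requirement met
            break
        worst = int(np.clip(err.argmax(), 1, len(x) - 2))
        knots = sorted(set(knots) | {x[worst]})  # refine locally at the worst fit
    return spl

x = np.linspace(0.0, 1.0, 400)
y = np.sin(8.0 * x) + 0.02 * np.random.randn(400)
spl = adaptive_fit(x, y, tol=0.05)
print("knots used:", len(spl.get_knots()))
```

The actual algorithm does the analogous thing on a surface: it measures the error against the z-map samples and inserts control points only in regions that fail the tolerance, which is what keeps the control point count small.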
Velocity driven haptic rendering
Pavel Kolcárek, Jirí Sochor
DOI: 10.1145/1101389.1101465
Abstract: This paper presents a compact method for improving haptic rendering in virtual environments using Level Of Detail (LOD) with force feedback. We have introduced a velocity driven LOD, a new concept in using LOD in haptics. The aim of this approach differs from using LOD techniques in graphics. Our method simplifies the model according to the speed of the user's fingertip. The users fly quickly through the scene and get a rough overview of it. As they slow down, they obtain more and more details. We have enhanced this basic idea by smooth switching between two successive LODs and by mechanisms leading to a more natural perception of dynamically changing surfaces.
Citations: 5
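A minimal sketch of the velocity-driven idea, assuming linear blending across hand-picked speed bands (the thresholds, force values, and function name below are illustrative, not from the paper):

```python
import numpy as np

# Illustrative speed thresholds (m/s) separating successive LODs:
# below THRESHOLDS[0] the finest mesh (LOD 0) is felt; above the
# last threshold, the coarsest.
THRESHOLDS = np.array([0.02, 0.05, 0.10])

def blended_force(speed, forces):
    """Blend the feedback forces of two successive LODs.

    forces[i] is the force the haptic loop would output using LOD i
    (0 = finest); len(forces) == len(THRESHOLDS). Linear interpolation
    across each speed band avoids force pops when the LOD changes.
    """
    i = int(np.searchsorted(THRESHOLDS, speed))   # band = coarse LOD index
    if i == 0:
        return forces[0]                          # moving slowly: full detail
    if i >= len(forces):
        return forces[-1]                         # very fast: coarsest mesh
    lo, hi = THRESHOLDS[i - 1], THRESHOLDS[i]
    t = (speed - lo) / (hi - lo)                  # 0 -> LOD i-1, 1 -> LOD i
    return (1.0 - t) * forces[i - 1] + t * forces[i]

# Forces (N) each LOD would render at one instant of the servo loop:
print(blended_force(0.03, forces=[1.00, 0.92, 0.80]))
```

Interpolating across each band is one way to get the smooth switching between successive LODs that the authors describe; without it, the rendered force would jump whenever the LOD changes.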
View-dependent tetrahedral meshing and rendering
Ralf Sondershaus, Wolfgang Straßer
DOI: 10.1145/1101389.1101394
Abstract: Interactive exploration of huge tetrahedral meshes is required by many applications, but the limitations of current hardware do not allow the full dataset to be rendered at interactive frame rates. Multiresolution representations are an important tool for adapting the tetrahedral mesh complexity to the current viewing parameters in real-time rendering environments. We present a meshing framework that builds a compact multiresolution representation for large tetrahedral meshes. A preprocessing step simplifies the mesh into a binary vertex hierarchy which is used at run time to adapt the mesh to viewing parameters. Exploiting the redundancy in the connectivity information of the mesh enables us to store the vertex hierarchy compactly, such that a vertex split or edge collapse can be performed from incremental updates only. We integrated this multiresolution representation into a volume rendering environment that supports direct volume rendering as well as a new point-based rendering approach.
Citations: 5
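A toy sketch of the run-time adaptation over a binary vertex hierarchy (the class layout and the distance-scaled error test are our assumptions; the paper's contribution additionally compresses the connectivity so that each split or collapse needs only incremental updates):

```python
from dataclasses import dataclass, field

@dataclass
class VNode:
    pos: tuple                  # vertex position
    error: float                # simplification error this merge introduced
    children: list = field(default_factory=list)   # the two vertices it merged

def adapt(node, eye, tau, active):
    """Walk the hierarchy, splitting while the projected error exceeds
    the screen-space tolerance tau; otherwise keep the merged vertex."""
    d = max(1e-6, sum((p - e) ** 2 for p, e in zip(node.pos, eye)) ** 0.5)
    if node.children and node.error / d > tau:   # too coarse from here: split
        for child in node.children:
            adapt(child, eye, tau, active)
    else:
        active.append(node)                      # coarse enough: keep merged

root = VNode(pos=(0, 0, 0), error=0.5, children=[
    VNode((-1, 0, 0), 0.1), VNode((1, 0, 0), 0.1)])
cut = []
adapt(root, eye=(0, 0, 5), tau=0.05, active=cut)
print(len(cut), "active vertices")
```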
Real-time high-quality View-Dependent Texture Mapping using per-pixel visibility
D. Porquet, J. Dischler, D. Ghazanfarpour
DOI: 10.1145/1101389.1101432
Abstract: We present an extension of View-Dependent Texture Mapping (VDTM) that allows rendering of complex geometric meshes at high frame rates without the usual blurring or skinning artifacts. We combine a hybrid geometric and image-based representation of a given 3D object to speed up rendering at the cost of a slight loss of visual accuracy. During a precomputation step, we store an image-based version of the original mesh by simply and quickly computing textures from viewpoints positioned around it by the user. During the rendering step, we use these textures to map colors and geometric details on the fly onto the surface of a low-polygon-count version of the mesh. Real-time rendering is achieved while combining up to three viewpoints at a time, using pixel shaders. No parameterization of the mesh is needed, and occlusion effects are taken into account while computing on the fly the best viewpoints for a given pixel. Moreover, the integration of this method into common real-time rendering systems is straightforward and allows applying self-shadowing as well as other z-buffer effects.
Citations: 27
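A CPU-side sketch of the per-pixel blend, assuming a common VDTM weighting (cosine proximity of the stored viewpoints to the current view, gated by per-pixel visibility); the exact weighting in the paper may differ, and on the GPU this runs in a pixel shader over the three best viewpoints:

```python
import numpy as np

def vdtm_weights(view_dir, stored_dirs, visible):
    """Blend weights for stored viewpoints: cosine proximity to the
    current view direction, zeroed where the pixel was occluded in
    that stored view."""
    w = np.maximum(0.0, stored_dirs @ view_dir) * visible
    s = w.sum()
    return w / s if s > 0 else w        # all-occluded pixels get no color

dirs = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.714], [-0.7, 0.0, 0.714]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
view = np.array([0.2, 0.0, 0.98]); view /= np.linalg.norm(view)
vis = np.array([1.0, 1.0, 0.0])         # this pixel is hidden in viewpoint 3
print(vdtm_weights(view, dirs, vis))
```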
Compositing color with texture for multi-variate visualization
Haleh Hagh-Shenas, V. Interrante
DOI: 10.1145/1101389.1101478
Abstract: Multivariate data visualization requires the development of effective techniques for simultaneously conveying multiple different data distributions over a common domain. Although it is easy to successfully use color to represent the value of a single variable at a given location, effectively using color to represent the values of multiple variables at the same point at the same time is a trickier business. In this paper, we provide a comprehensive overview of strategies for effectively combining color with texture to represent multiple values at a single spatial location, and present a new technique for automatically interweaving multiple colors through the structure of an acquired texture pattern.
Citations: 26
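A small sketch of the interweaving idea, assuming a synthetic stripe mask stands in for the acquired texture pattern the paper analyzes: one variable colors the texture's foreground texels and a second colors the background, so two fields share one image (matplotlib colormaps are used purely for illustration):

```python
import numpy as np
import matplotlib.cm as cm

def interweave(var_a, var_b, mask):
    """Per pixel: foreground texels show colormap(var_a), background
    texels show colormap(var_b), packing two fields into one image."""
    rgba_a = cm.viridis(var_a)
    rgba_b = cm.plasma(var_b)
    return np.where(mask[..., None], rgba_a, rgba_b)

h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
stripes = (xx // 4) % 2 == 0            # stand-in "texture" pattern
a = yy / h                              # variable 1: vertical gradient
b = xx / w                              # variable 2: horizontal gradient
img = interweave(a, b, stripes)
print(img.shape)                        # (64, 64, 4) RGBA image
```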
A loose and sketchy approach in a mediated reality environment
M. Haller, Florian Landerl, M. Billinghurst
DOI: 10.1145/1101389.1101463
Abstract: In this paper, we present sketchy-ar-us, a modified, real-time version of the Loose and Sketchy algorithm used to render graphics in an AR environment. The primary challenge was to modify the original algorithm to produce an NPR effect at interactive frame rates. Our algorithm renders moderately complex scenes at multiple frames per second. Equipped with a handheld visor, visitors can see the real environment overlaid with virtual objects, with both the real and virtual content rendered in a non-photorealistic style.
Citations: 26
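For flavor only, here is a toy image-space "sketchy" filter of our own: it jitters the sampling grid before taking an edge magnitude, so edges come out wobbly and hand-drawn-looking. This is a generic NPR effect, much simpler than the Loose and Sketchy algorithm, which perturbs strokes along silhouettes:

```python
import numpy as np

def sketchy(gray, jitter=1.5, seed=0):
    """Edge magnitude of a jittered image: wobbly, pen-like lines."""
    rng = np.random.default_rng(seed)
    h, w = gray.shape
    # Perturb the sampling grid a little so edges wobble like strokes.
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    yy = np.clip(yy + rng.normal(0, jitter, (h, w)), 0, h - 1).astype(int)
    xx = np.clip(xx + rng.normal(0, jitter, (h, w)), 0, w - 1).astype(int)
    warped = gray[yy, xx]
    gy, gx = np.gradient(warped)
    return np.hypot(gx, gy)             # strong response along wobbly edges

frame = np.zeros((64, 64)); frame[16:48, 16:48] = 1.0   # a white square
strokes = sketchy(frame)
print(strokes.max() > 0)
```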
Iris synthesis: a reverse subdivision application
Lakin Wecker, F. Samavati, M. Gavrilova
DOI: 10.1145/1101389.1101411
Abstract: Due to renewed interest in security, iris images have become a popular biometric alternative to fingerprints for human identification. However, there exist very few databases on which researchers can test iris recognition technology. We present a novel method to augment existing databases through iris image synthesis. A multiresolution technique known as reverse subdivision is used to capture the necessary characteristics from existing irises, which are then combined to form a new iris image. In order to improve the results, a set of heuristics to classify iris images is proposed. We analyze the performance of these heuristics and provide preliminary results of the iris synthesis method.
Citations: 27
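A minimal multiresolution decompose/reconstruct sketch, assuming a Haar-style split stands in for the reverse subdivision filters the paper actually uses; it shows how coarse structure captured from one iris row can be recombined with fine detail from another:

```python
import numpy as np

def decompose(fine):
    """Split a signal into a coarse approximation and the detail lost."""
    coarse = 0.5 * (fine[0::2] + fine[1::2])
    detail = 0.5 * (fine[0::2] - fine[1::2])
    return coarse, detail

def reconstruct(coarse, detail):
    """Exact inverse of decompose()."""
    fine = np.empty(2 * len(coarse))
    fine[0::2] = coarse + detail
    fine[1::2] = coarse - detail
    return fine

row_a = np.array([3., 5., 4., 8., 2., 6., 7., 9.])   # a row from iris A
row_b = np.array([1., 2., 6., 4., 5., 5., 3., 2.])   # a row from iris B
ca, _ = decompose(row_a)
_, db = decompose(row_b)
hybrid = reconstruct(ca, db)      # A's coarse structure, B's fine detail
print(hybrid)
```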
Example-based color transformation for image and video
Youngha Chang, S. Saito, M. Nakajima
DOI: 10.1145/1101389.1101459
Abstract: Color is very important in setting the mood of images and video sequences. For this reason, color transformation is one of the most important features in photo-editing or video post-production tools, because even slight modifications of colors in an image can strongly increase its visual appeal. However, conventional color editing tools require manual operation for detailed color manipulation, and such manual operation becomes a burden, especially when editing video frame sequences. To avoid this problem, we previously suggested a method [Chang et al. 2004] that performs an example-based color stylization of images using perceptual color categories. In this paper, we extend this method to make the algorithm more robust and to stylize the colors of video frame sequences. The main extensions are the following five points: applicability to images taken under a variety of lighting conditions; speeding up the color naming step; improving the mapping between source and reference colors when there is a disparity in the size of the chromatic categories; separate handling of achromatic and chromatic categories; and extending the algorithm along the temporal axis to allow video processing. We present a variety of results, arguing that these images and videos convey a different, but coherent, mood.
Citations: 6
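As a baseline for what "example-based color transformation" means computationally, here is the classic mean/variance transfer of Reinhard et al. [2001]; the paper's method is category-based and considerably more elaborate, so treat this only as the simplest member of the family:

```python
import numpy as np

def transfer(source, reference):
    """Shift each channel of `source` to match `reference`'s mean and
    standard deviation. Both are float arrays of shape (h, w, 3),
    ideally in a decorrelated space such as Lab rather than RGB."""
    out = np.empty_like(source)
    for c in range(3):
        s, r = source[..., c], reference[..., c]
        scale = r.std() / max(s.std(), 1e-6)
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return out

src = np.random.rand(4, 4, 3)
ref = 0.5 + 0.2 * np.random.rand(4, 4, 3)
res = transfer(src, ref)
print(res[..., 0].mean().round(3), ref[..., 0].mean().round(3))  # now close
```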
Spherical mirror: a new approach to hemispherical dome projection
P. Bourke
DOI: 10.1145/1101389.1101445
Abstract: Planetariums and smaller personal domes can provide an immersive environment for science education, virtual reality, and entertainment [Shaw 1998]. Digital projection into domes, called "full dome projection" in the industry, can be a technically challenging and expensive exercise, particularly so for installations with modest budgets. An alternative full dome digital projection system is presented based upon a single projector and a spherical mirror to scatter the light onto the dome surface. The approach offers many advantages over the fisheye lens alternatives, results in a similar quality result, but at a fraction of the cost.
Citations: 36
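The warp such a setup needs can be generated by ray tracing: follow each projector pixel's ray to the mirror, reflect it, and record where it lands on the dome. The sketch below traces a single ray, assuming an idealized geometry of our own choosing (positions, radii, and function names are illustrative, not Bourke's actual warp-map generator):

```python
import numpy as np

def reflect(d, n):
    """Mirror reflection of direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def hit_sphere(origin, d, center, radius):
    """First intersection of the ray origin + t*d with a sphere, or None."""
    oc = origin - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return origin + t * d if t > 0 else None

mirror_c = np.array([0.0, 0.0, 0.0]); mirror_r = 0.3   # spherical mirror
projector = np.array([1.5, 0.0, 0.0])                  # projector position
ray = np.array([-1.0, 0.0, 0.05]); ray /= np.linalg.norm(ray)

p = hit_sphere(projector, ray, mirror_c, mirror_r)
if p is not None:
    n = (p - mirror_c) / mirror_r      # outward normal on the mirror
    out = reflect(ray, n)              # direction scattered toward the dome
    print(p.round(3), out.round(3))
```

Repeating this for every projector pixel and intersecting the reflected ray with the dome yields the mapping used to pre-warp the rendered image.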