Eurographics Symposium on Rendering: Latest Publications

Statistical acquisition of texture appearance
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/031-040
A. Ngan, F. Durand
{"title":"Statistical acquisition of texture appearance","authors":"A. Ngan, F. Durand","doi":"10.2312/EGWR/EGSR06/031-040","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/031-040","url":null,"abstract":"We propose a simple method to acquire and reconstruct material appearance with sparsely sampled data. Our technique renders elaborate view- and light-dependent effects and faithfully reproduces materials such as fabrics and knitwears. Our approach uses sparse measurements to reconstruct a full six-dimensional Bidirectional Texture Function (BTF). Our reconstruction only require input images from the top view to be registered, which is easy to achieve with a fixed camera setup. Bidirectional properties are acquired from a sparse set of viewing directions through image statistics and therefore precise registrations for these views are unnecessary. Our technique is based on multi-scale histograms of image pyramids. The full BTF is generated by matching the corresponding pyramid histograms to interpolated top-view images. We show that the use of multi-scale image statistics achieves a visually plausible appearance. However, our technique does not fully capture sharp specularities or the geometric aspects of parallax. Nonetheless, a large class of materials can be reproduced well with our technique, and our statistical characterization enables acquisition of such materials efficiently using a simple setup.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132773353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 28
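The reconstruction step is purely statistical: each synthesized pyramid level is remapped so its histogram matches the one measured for the corresponding level of the sparse view samples. Below is a minimal sketch of that per-level operation; the rank-based mapping and the function name are illustrative, not the authors' code, and the surrounding pyramid machinery is omitted.

```python
import numpy as np

def match_histogram(source, target):
    """Remap source values so their distribution matches target's.

    Illustrative rank-based histogram matching for one pyramid level;
    the paper applies this kind of statistic transfer per level of an
    image pyramid (that pyramid machinery is not reproduced here).
    """
    s = source.ravel()
    t = np.sort(target.ravel())
    ranks = np.argsort(np.argsort(s))          # rank of every source pixel
    idx = np.round(ranks * (t.size - 1) / max(s.size - 1, 1)).astype(int)
    return t[idx].reshape(source.shape)        # equal-rank target values
```

Applied to each level of a pyramid of an interpolated top-view image, with targets taken from the sparse bidirectional measurements, this mirrors the paper's matching loop at a very coarse level of fidelity.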
Real-time soft shadow mapping by backprojection
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/227-234
Gaël Guennebaud, L. Barthe, M. Paulin
{"title":"Real-time soft shadow mapping by backprojection","authors":"Gaël Guennebaud, L. Barthe, M. Paulin","doi":"10.2312/EGWR/EGSR06/227-234","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/227-234","url":null,"abstract":"We present a new real-time soft shadow algorithm using a single shadow map per light source. Therefore, our algorithm is well suited to render both complex and dynamic scenes, and it handles all rasterizable geometries. The key idea of our method is to use the shadow map as a simple and uniform discretized represention of the scene, thus allowing us to generate realistic soft shadows in most cases. In particular it naturally handles occluder fusion. Also, our algorithm deals with rectangular light sources as well as textured light sources with high precision, and it maps well to programmable graphics hardware.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"310 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114289171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 108
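To make the backprojection idea concrete, here is a deliberately naive CPU sketch, not the authors' GPU implementation: every shadow-map texel nearer the light than the receiver is backprojected from the receiver onto the light's plane, and the clipped footprint areas are summed. A real implementation restricts the loop to a small search window and, as the paper describes, fuses overlapping occluders; this sketch double-counts overlaps, so it overestimates occlusion.

```python
import numpy as np

def soft_visibility(depth, texel_xy, half_texel, p, r):
    """Approximate area-light visibility at receiver p by backprojection.

    Simplified sketch of the technique (not the paper's code):
    depth:      (H, W) texel depths, i.e. distance from the light plane
    texel_xy:   (H, W, 2) texel centers in light-space x, y
    half_texel: half the texel footprint in light-space units
    p:          receiver position (x, y, z), z > 0, light plane at z = 0
    r:          half-size of the square area light centered at the origin
    """
    occluded = 0.0
    px, py, pz = p
    H, W = depth.shape
    for i in range(H):
        for j in range(W):
            zt = depth[i, j]
            if zt >= pz:            # texel is at or behind the receiver
                continue
            s = pz / (pz - zt)      # projection scale from p through texel
            cx, cy = texel_xy[i, j]
            # Backproject the texel's square footprint onto the light plane.
            x0 = px + s * (cx - half_texel - px)
            x1 = px + s * (cx + half_texel - px)
            y0 = py + s * (cy - half_texel - py)
            y1 = py + s * (cy + half_texel - py)
            # Clip against the light square [-r, r]^2 and accumulate area.
            w = max(0.0, min(x1, r) - max(x0, -r))
            h = max(0.0, min(y1, r) - max(y0, -r))
            occluded += w * h
    return max(0.0, 1.0 - occluded / (4.0 * r * r))
```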
Ardeco: automatic region detection and conversion
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/349-360
Grégory Lecot, B. Lévy
{"title":"Ardeco: automatic region detection and conversion","authors":"Grégory Lecot, B. Lévy","doi":"10.2312/EGWR/EGSR06/349-360","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/349-360","url":null,"abstract":"We present Ardeco, a new algorithm for image abstraction and conversion from bitmap images into vector graphics. Given a bitmap image, our algorithm automatically computes the set of vector primitives and gradients that best approximates the image. In addition, more details can be generated in user-selected important regions, defined from eye-tracking data or from an importance map painted by the user. Our algorithm is based on a new two-level variational parametric segmentation algorithm, minimizing Mumford and Shah's energy and operating on an intermediate triangulation, well adapted to the features of the image.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130322011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 103
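Ardeco's variational core minimizes a Mumford-Shah-style energy. As background, the classical piecewise-constant form of that energy on a pixel grid can be evaluated as below; the paper's actual energy is two-level, operates on a triangulation, and fits constant, linear, or quadratic gradients per region, none of which is modeled in this sketch.

```python
import numpy as np

def mumford_shah_energy(image, labels, lam):
    """Piecewise-constant Mumford-Shah energy of a segmentation.

    E = sum over regions R of sum_{p in R} (I(p) - mean_R)^2
        + lam * boundary_length
    image:  (H, W) float grayscale image
    labels: (H, W) int region labels
    lam:    boundary-length weight (higher => fewer, larger regions)
    """
    fidelity = 0.0
    for l in np.unique(labels):
        region = image[labels == l]
        fidelity += np.sum((region - region.mean()) ** 2)
    # Boundary length approximated by label changes between 4-neighbors.
    boundary = (np.count_nonzero(labels[1:, :] != labels[:-1, :])
                + np.count_nonzero(labels[:, 1:] != labels[:, :-1]))
    return fidelity + lam * boundary
```

A segmentation algorithm in this family proposes region updates and keeps those that lower this energy; lam trades fidelity against abstraction.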
Feature-aware texturing
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/297-303
Ran Gal, O. Sorkine-Hornung, D. Cohen-Or
{"title":"Feature-aware texturing","authors":"Ran Gal, O. Sorkine-Hornung, D. Cohen-Or","doi":"10.2312/EGWR/EGSR06/297-303","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/297-303","url":null,"abstract":"We present a method for inhomogeneous 2D texture mapping guided by a feature mask, that preserves some regions of the image, such as foreground objects or other prominent parts. The method is able to arbitrarily warp a given image while preserving the shape of its features by constraining their deformation to be a similarity transformation. In particular, our method allows global or local changes to the aspect ratio of the texture without causing undesirable shearing to the features. The algorithmic core of our method is a particular formulation of the Laplacian editing technique, suited to accommodate similarity constraints on parts of the domain. The method is useful in digital imaging, texture design and any other applications involving image warping, where parts of the image have high familiarity and should retain their shape after modification.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"2010 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127339652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 210
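The warping core is a Laplacian editing solve. The toy below shows only the base mechanism, in 1D: keep each vertex's Laplacian (its local shape) close to the rest state while meeting soft positional constraints, all in least squares. The similarity-transform constraints on masked features, which are the paper's contribution, are omitted, and all names here are illustrative.

```python
import numpy as np

def laplacian_warp_1d(x0, constraints, w=10.0):
    """Least-squares Laplacian editing of a 1D coordinate sequence.

    x0:          (n,) rest-state coordinates
    constraints: {vertex index: target position}, enforced softly
    w:           constraint weight relative to shape preservation
    """
    n = len(x0)
    # Discrete Laplacian rows: -x[i-1] + 2 x[i] - x[i+1].
    L = np.zeros((n - 2, n))
    for i in range(1, n - 1):
        L[i - 1, i - 1], L[i - 1, i], L[i - 1, i + 1] = -1.0, 2.0, -1.0
    rows, rhs = [L], [L @ x0]          # preserve rest-state Laplacians
    for idx, pos in constraints.items():
        row = np.zeros(n)
        row[idx] = w
        rows.append(row[None, :])
        rhs.append(np.array([w * pos]))
    A, b = np.vstack(rows), np.concatenate(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

For example, laplacian_warp_1d(np.linspace(0.0, 1.0, 11), {0: 0.0, 10: 1.5}) stretches the strip to 1.5 while the interior distributes the deformation evenly; the paper's similarity terms would instead lock feature spans rigidly and push all stretch into the unmasked parts.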
Wavelet radiance transport for interactive indirect lighting
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/161-171
Janne Kontkanen, Emmanuel Turquin, Nicolas Holzschuch, F. Sillion
{"title":"Wavelet radiance transport for interactive indirect lighting","authors":"Janne Kontkanen, Emmanuel Turquin, Nicolas Holzschuch, F. Sillion","doi":"10.2312/EGWR/EGSR06/161-171","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/161-171","url":null,"abstract":"Global illumination is a complex all-frequency phenomenon including subtle effects caused by indirect lighting. Computing global illumination interactively for dynamic lighting conditions has many potential applications, notably in architecture, motion pictures and computer games. It remains a challenging issue, despite the considerable amount of research work devoted to finding efficient methods. This paper presents a novel method for fast computation of indirect lighting; combined with a separate calculation of direct lighting, we provide interactive global illumination for scenes with diffuse and glossy materials, and arbitrarily distributed point light sources. To achieve this goal, we introduce three new tools: a 4D wavelet basis for concise radiance expression, an efficient hierarchical pre-computation of the Global Transport Operator representing the entire propagation of radiance in the scene in a single operation, and a run-time projection of direct lighting on to our wavelet basis. The resulting technique allows unprecedented freedom in the interactive manipulation of lighting for static scenes.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125696240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 46
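The run-time loop of such a method has a simple shape: project direct lighting into the wavelet basis, multiply by the precomputed transport operator, and transform back. The sketch below uses a 1D orthonormal Haar transform purely to show that pipeline; the paper's basis is 4D and its Global Transport Operator is hierarchical and sparse, neither of which is reproduced here.

```python
import numpy as np

def haar(v):
    """Orthonormal 1D Haar wavelet analysis (length must be a power of 2)."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        a = (v[:n:2] + v[1:n:2]) / np.sqrt(2.0)   # pairwise averages
        d = (v[:n:2] - v[1:n:2]) / np.sqrt(2.0)   # pairwise details
        v[: n // 2], v[n // 2 : n] = a, d
        n //= 2
    return v

def ihaar(w):
    """Inverse of haar()."""
    w = w.astype(float).copy()
    n = 1
    while n < len(w):
        a, d = w[:n].copy(), w[n : 2 * n].copy()
        w[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        w[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return w

def indirect_lighting(T, direct):
    """Schematic run-time step: direct light -> wavelet coefficients ->
    precomputed transport operator T -> indirect light."""
    return ihaar(T @ haar(direct))
```

Because the transform is orthonormal and T is precomputed offline, the per-frame cost is dominated by one (sparse, in the paper) matrix-vector product, which is what makes interactive relighting possible.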
Silhouette texture
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/285-296
Hongzhi Wu, Li-Yi Wei, Xi Wang, B. Guo
{"title":"Silhouette texture","authors":"Hongzhi Wu, Li-Yi Wei, Xi Wang, B. Guo","doi":"10.2312/EGWR/EGSR06/285-296","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/285-296","url":null,"abstract":"Using coarse meshes with textures and/or normal maps to represent detailed meshes often results in poor visual quality along silhouettes. To tackle this problem, we introduce silhouette texture, a new data structure for capturing and reconstructing the silhouettes of detailed meshes. In addition to the recording of color and normal fields in traditional methods, we sample information that represents the original silhouettes and pack it into a three dimensional texture. In the rendering stage, our algorithm extracts relevant information from the texture to rebuild the silhouettes for any perspective view. Unlike previous work, our approach is based on GPU and could achieve high rendering performance. Moreover, both exterior and interior silhouettes are processed for better approximation quality. In addition to rendering acceleration, our algorithm also enables detailed silhouette visualization with minimum geometric complexity.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"13 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130052467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
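For context on what must be reconstructed: a silhouette edge of a mesh, for a given viewpoint, is an edge shared by a front-facing and a back-facing triangle. The sketch below computes those edges on the CPU; it is background for the problem, not the paper's texture-packed GPU reconstruction, and all names are illustrative.

```python
from collections import defaultdict
import numpy as np

def silhouette_edges(verts, faces, eye):
    """Silhouette edges of a triangle mesh as seen from 'eye'.

    verts: (V, 3) float vertex positions
    faces: (F, 3) int vertex indices per triangle
    eye:   (3,) float viewpoint
    """
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)                       # face normals
    front = np.einsum('ij,ij->i', n, eye - a) > 0.0  # faces toward the eye?
    adj = defaultdict(list)                          # edge -> facing flags
    for f, tri in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((int(tri[i]), int(tri[(i + 1) % 3]))))
            adj[e].append(front[f])
    # A silhouette edge's two adjacent faces disagree on facing.
    return [e for e, fl in adj.items() if len(fl) == 2 and fl[0] != fl[1]]
```

The paper's contribution is to precompute enough of this view-dependent contour information from the detailed mesh, pack it into a 3D texture, and rebuild both exterior and interior silhouettes per view without ever touching the detailed geometry at run time.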
Sparse lumigraph relighting by illumination and reflectance estimation from multi-view images
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/041-050
Tianli Yu, Hongcheng Wang, N. Ahuja, Wei-Chao Chen
{"title":"Sparse lumigraph relighting by illumination and reflectance estimation from multi-view images","authors":"Tianli Yu, Hongcheng Wang, N. Ahuja, Wei-Chao Chen","doi":"10.2312/EGWR/EGSR06/041-050","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/041-050","url":null,"abstract":"We present a novel relighting approach that does not assume that the illumination is known or controllable. Instead, we estimate the illumination and texture from given multi-view images captured under a single illumination setting, given the object shape. We rely on the viewpoint-dependence of surface reflectance to resolve the usual texture-illumination ambiguity. The task of obtaining the illumination and texture models is formulated as the decomposition of the observed surface radiance tensor into the product of a light transport tensor, and illumination and texture matrices. We estimate both the illumination and texture at the same time by solving a system of bilinear equations. To reduce estimation error due to imperfect input surface geometry, we also perform a multi-scale discrete search on the specular surface normal. Our results on synthetic and real data indicate that we can estimate the illumination, the diffuse as well as the specular components of the surface texture map (up to a global scaling ambiguity). Our approach allows more flexibilities in rendering novel images, such as view changing, and light and texture editing.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124632452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
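The bilinear structure can be sanity-checked on a toy model. In the alternating least-squares sketch below, obs[v, p] is approximated by (T[v, p, :] · light) * tex[p]; fixing one factor makes the problem linear in the other. The shapes, names, and scalar-texture model are schematic assumptions, not the paper's formulation, which has a richer tensor layout and specular handling. The global scale ambiguity the abstract mentions is visible here too: (c * light, tex / c) fits equally well for any c.

```python
import numpy as np

def bilinear_factorize(obs, T, iters=50):
    """Alternating least squares for obs[v,p] ~= (T[v,p,:] @ light) * tex[p].

    obs: (V, P) observed radiance over V views and P texels
    T:   (V, P, L) known light transport for L illumination basis lights
    """
    V, P, L = T.shape
    light, tex = np.ones(L), np.ones(P)
    for _ in range(iters):
        # Fix tex: the model is linear in the illumination vector.
        A = (T * tex[None, :, None]).reshape(V * P, L)
        light = np.linalg.lstsq(A, obs.ravel(), rcond=None)[0]
        # Fix light: one scalar least-squares problem per texel.
        s = T @ light                               # (V, P) untextured shading
        tex = (s * obs).sum(0) / np.maximum((s * s).sum(0), 1e-12)
    return light, tex
```

The reason view dependence helps is visible in the update for tex: each texel is constrained by V differently shaded observations, so texture cannot freely absorb illumination variation.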
Practical, real-time studio matting using dual imagers
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/235-244
M. McGuire, W. Matusik, W. Yerazunis
{"title":"Practical, real-time studio matting using dual imagers","authors":"M. McGuire, W. Matusik, W. Yerazunis","doi":"10.2312/EGWR/EGSR06/235-244","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/235-244","url":null,"abstract":"This paper presents a practical system for capturing high-resolution video mattes using cameras that contain two imagers on one optical axis. The dual imagers capture registered frames that differ only by defocus or polarization at pixels corresponding to special background 'gray-screens.' This system eliminates color spill and other drawbacks of blue-screen matting while preserving many of its desirable properties (e.g., unassisted, real-time, natural illumination) over more recent methods, and achieving higher precision output for Bayer-filter digital cameras. Because two imagers capture more information than one, we are able to automatically process scenes that would require manual retouching with blue-screen matting.\u0000 The dual-imager system successfully pulls mattes for scenes containing thin hair, liquids, glass, and reflective objects; mirror reflections produce incorrect results. We show result comparisons for these scenes against blue-screen matting and describe materials and patterns for building a capture system.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116138720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
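The value of two registered observations is easiest to see in the classical two-background (Smith-Blinn triangulation) matting identity, which setups like this build on: if the two frames share the same foreground but differ in the effective background, subtracting cancels the foreground and alpha falls out per pixel. This is background theory under assumed known background plates, not the authors' exact defocus/polarization estimator.

```python
import numpy as np

def dual_background_alpha(c1, c2, b1, b2, eps=1e-6):
    """Per-pixel alpha from two frames differing only in background.

    With C_i = alpha * F + (1 - alpha) * B_i and the same foreground F
    in both frames, subtraction cancels F:
        C1 - C2 = (1 - alpha) * (B1 - B2)
    Solved in least squares over the color channels (last axis).
    c1, c2: (H, W, 3) captures; b1, b2: (H, W, 3) known background
    plates (here, the registered gray-screen observations); all in [0,1].
    """
    num = np.sum((c1 - c2) * (b1 - b2), axis=-1)
    den = np.maximum(np.sum((b1 - b2) ** 2, axis=-1), eps)
    return np.clip(1.0 - num / den, 0.0, 1.0)
```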
Handheld pixels
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/017-018
P. Nordlund
{"title":"Handheld pixels","authors":"P. Nordlund","doi":"10.2312/EGWR/EGSR06/017-018","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/017-018","url":null,"abstract":"During this decade, pixels have become mobile. Cell phones, PDAs, handheld gaming consoles and other similar devices start to have color displays by standard and color displays are hungry for high-quality graphics. QVGA and VGA display resolutions are common, requiring dedicated hardware for graphics acceleration. Color displays and open platforms also invite games and other applications, which build on the availability of robust graphics. Handheld graphics acceleration is close to its desktop and games console counterparts Ű with content running on an embedded version of OpenGL, the OpenGL ES 2.0, vertex and pixel shaders are a requirement. Floating-point accuracy, lots of texture surfaces, plenty of performance Ű handheld pixels are of good quality and there are lots of them. Handheld gaming drives the handheld 3D graphics performance, but unlike on desktops, vector graphics hardware acceleration will become an even widely spread requirement on new handheld platforms. Applications such as the device's main graphical user interface and interactive maps are driving these requirements. In addition to performance, a strong driver for vector graphics on handhelds is image quality.\u0000 The first handheld devices, including cell phones, with dedicated 3D graphics accelerators have already hit the market. By 2010, a large number of new cell phones and PDAs will be enabled with hardware vector- and 3D graphics acceleration. The volume of graphics acceleration enabled silicon chips shipping for handheld devices is expected to be significantly higher than for desktop PCs and gaming consoles. This creates a lucrative platform for game and application developers who want to develop handheld content with high-quality graphics.\u0000 As there are numerous different handheld devices, the industry is fighting against fragmentation Ű widely adopted platforms must be created to enable universal content development across a wide range of platforms and end devices Ű the platform race is already on.\u0000 All in all, the industry is busy creating all the essential components to bring high-quality programmable pixels to handheld devices. Content developers are already up-to speed to provide winning content for these devices. All in all, the future of handheld pixels looks rosy!","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133479901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Visual chatter in the real world
Eurographics Symposium on Rendering · Pub Date: 2006-06-26 · DOI: 10.2312/EGWR/EGSR06/011-016
S. Nayar, Gurunandan Krishnan, M. Grossberg, R. Raskar
{"title":"Visual chatter in the real world","authors":"S. Nayar, Gurunandan Krishnan, M. Grossberg, R. Raskar","doi":"10.2312/EGWR/EGSR06/011-016","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/011-016","url":null,"abstract":"When a scene is lit by a source of light, the radiance of each point in the scene can be viewed as having two components, namely, direct and global. Recently, an efFIcient separation method has been proposed that uses high frequency illumination patterns to measure the direct and global components of a scene. The global component could arise from not only interreflections but also subsurface scattering within translucent surfaces and volumetric scattering by participating media. In this paper, we use this method to measure the direct and global components of a variety of natural and man-made materials. The computed direct and global images provide interesting insights into the scattering properties of common real-world materials. We have also measured the two components for a 3D texture as a function of lighting direction. This experiment shows that the global component of a BTF tends vary smoothly with respect to the lighting direction compared to the direct component of the BTF. Finally, we apply the separation method to a translucent object for different imaging and illumination scales (resolutions). The results obtained show how the BSSDRF of the object gradually reduces to a BRDF as one goes from fine to coarse scale. All the measurement results reported here, as well as several others, can be viewed as high resolution images at http://www1.cs.columbia.edu/CAVE/projects/separation/separation.php.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116621356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
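The separation rule this paper applies is compact enough to state in code. In its simplest form (a half-on/half-off high-frequency pattern shifted across the scene, ignoring camera noise and ambient correction), each pixel's maximum over the shifted-pattern stack approximates L_direct + L_global/2 and its minimum approximates L_global/2:

```python
import numpy as np

def separate_direct_global(stack):
    """Direct/global separation from shifted high-frequency patterns.

    stack: (N, H, W) grayscale captures under N shifts of a pattern
    whose on-fraction is 1/2. Per pixel:
        max ~= L_direct + L_global / 2,   min ~= L_global / 2
    so  L_direct = max - min  and  L_global = 2 * min.
    Returns (direct, global) images.
    """
    lmax = stack.max(axis=0)
    lmin = stack.min(axis=0)
    return lmax - lmin, 2.0 * lmin
```

The intuition: direct radiance at a pixel only survives when its scene point is lit (hence it shows up in the maximum), while the global component, being an integral over many scene points, is nearly constant across shifts of a sufficiently high-frequency pattern.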