{"title":"A spectral BSSRDF for shading human skin","authors":"Craig Donner, H. Jensen","doi":"10.2312/EGWR/EGSR06/409-417","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/409-417","url":null,"abstract":"We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114872582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetric photography: exploiting data-sparseness in reflectance fields","authors":"Gaurav Garg, Eino-Ville Talvala, M. Levoy, H. Lensch","doi":"10.2312/EGWR/EGSR06/251-262","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/251-262","url":null,"abstract":"We present a novel technique called symmetric photography to capture real world reflectance fields. The technique models the 8D reflectance field as a transport matrix between the 4D incident light field and the 4D exitant light field. It is a challenging task to acquire this transport matrix due to its large size. Fortunately, the transport matrix is symmetric and often data-sparse. Symmetry enables us to measure the light transport from two sides simultaneously, from the illumination directions and the view directions. Data-sparseness refers to the fact that sub-blocks of the matrix can be well approximated using low-rank representations. We introduce the use of hierarchical tensors as the underlying data structure to capture this data-sparseness, specifically through local rank-1 factorizations of the transport matrix. Besides providing an efficient representation for storage, it enables fast acquisition of the approximated transport matrix and fast rendering of images from the captured matrix. Our prototype acquisition system consists of an array of mirrors and a pair of coaxial projector and camera. We demonstrate the effectiveness of our system with scenes rendered from reflectance fields that were captured by our system. In these renderings we can change the viewpoint as well as relight using arbitrary incident light fields.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130883564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting temporal coherence for incremental all-frequency relighting","authors":"R. Overbeck, A. Ben-Artzi, R. Ramamoorthi, E. Grinspun","doi":"10.2312/EGWR/EGSR06/151-160","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/151-160","url":null,"abstract":"Precomputed radiance transfer (PRT) enables all-frequency relighting with complex illumination, materials and shadows. To achieve real-time performance, PRT exploits angular coherence in the illumination, and spatial coherence in the light transport. Temporal coherence of the lighting from frame to frame is an important, but unexplored additional form of coherence for PRT. In this paper, we develop incremental methods for approximating the differences in lighting between consecutive frames. We analyze the lighting wavelet decomposition over typical motion sequences, and observe differing degrees of temporal coherence across levels of the wavelet hierarchy. To address this, our algorithm treats each level separately, adapting to available coherence. The proposed method is orthogonal to other forms of coherence, and can be added to almost any all-frequency PRT algorithm with minimal implementation, computation or memory overhead. We demonstrate our technique within existing codes for nonlinear wavelet approximation, changing view with BRDF factorization, and clustered PCA. Exploiting temporal coherence of dynamic lighting yields a 3×-4× performance improvement, e.g., all-frequency effects are achieved with 30 wavelet coefficients per frame for the lighting, about the same as low-frequency spherical harmonic methods. Notably, our algorithm smoothly converges to the exact result within a few frames of the lighting becoming static.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130356217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient wavelet rotation for environment map rendering","authors":"Rui Wang, Ren Ng, D. Luebke, G. Humphreys","doi":"10.2312/EGWR/EGSR06/173-182","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/173-182","url":null,"abstract":"Real-time shading with environment maps requires the ability to rotate the global lighting to each surface point's local coordinate frame. Although extensive previous work has studied rotation of functions represented by spherical harmonics, little work has investigated efficient rotation of wavelets. Wavelets are superior at approximating high frequency signals such as detailed high dynamic range lighting and very shiny BRDFs, but present difficulties for interactive rendering due to the lack of an analytic solution for rotation. In this paper we present an efficient computational solution for wavelet rotation using precomputed matrices. Each matrix represents the linear transformation of source wavelet bases defined in the global coordinate frame to target wavelet bases defined in sampled local frames. Since wavelets have compact support, these matrices are very sparse, enabling efficient storage and fast computation at run-time. In this paper, we focus on the application of our technique to interactive environment map rendering. We show that using these matrices allows us to evaluate the integral of dynamic lighting with dynamic BRDFs at interactive rates, incorporating efficient non-linear approximation of both illumination and reflection. Our technique improves on previous work by eliminating the need for prefiltering environment maps, and is thus significantly faster for accurate rendering of dynamic environment lighting with high frequency reflection effects.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127767245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture transfer using geometry correlation","authors":"T. Mertens, J. Kautz, Jiawen Chen, P. Bekaert, F. Durand","doi":"10.2312/EGWR/EGSR06/273-284","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/273-284","url":null,"abstract":"Texture variation on real-world objects often correlates with underlying geometric characteristics and creates a visually rich appearance. We present a technique to transfer such geometry-dependent texture variation from an example textured model to new geometry in a visually consistent way. It captures the correlation between a set of geometric features, such as curvature, and the observed diffuse texture. We perform dimensionality reduction on the overcomplete feature set which yields a compact guidance field that is used to drive a spatially varying texture synthesis model. In addition, we introduce a method to enrich the guidance field when the target geometry strongly differs from the example. Our method transfers elaborate texture variation that follows geometric features, which gives 3D models a compelling photorealistic appearance.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128751937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient multi-view rasterization architecture","authors":"J. Hasselgren, T. Akenine-Möller","doi":"10.2312/EGWR/EGSR06/061-072","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/061-072","url":null,"abstract":"Multi-view displays for 3D TV have been designed and built. However, these displays have received relatively little attention in the context of real-time computer graphics. We present a novel rasterization architecture that rasterizes each triangle to multiple views simultaneously. When determining which tile in which view to rasterize next, we use an efficiency measure that estimates which tile is expected to get the most hits in the texture cache. Once that tile has been rasterized, the efficiency measure is updated, and a new tile and view are selected. Our traversal algorithm provides significant reductions in the amount of texture fetches, and bandwidth gains on the order of a magnitude have been observed. We also present an approximate rasterization algorithm that avoids pixel shader evaluations for a substantial fraction (up to 95%) of fragments and still maintains high image quality.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125256729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bidirectional instant radiosity","authors":"B. Segovia, J. Iehl, R. Mitanchey, B. Péroche","doi":"10.2312/EGWR/EGSR06/389-397","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/389-397","url":null,"abstract":"This paper presents a new sampling strategy to achieve interactive global illumination on one commodity computer. The goal is to propose an efficient numerical stochastic scheme which can be well adapted to a fast rendering algorithm. As we want to provide an efficient sampling strategy to handle difficult settings without sacrificing performance in common cases, we developed an extension of Instant Radiosity [Kel97] in the same way that bidirectional path tracing is an extension of path or light tracing. Our idea is to build several estimators and to efficiently combine them to find a set of virtual point light sources which are relevant for the areas of the scene seen by the camera. The resulting algorithm is faster than classical solutions to global illumination. Using today's graphics hardware, an interactive frame rate and the convergence of the scheme can be easily obtained in scenes with many light sources, glossy materials or difficult visibility problems.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125492653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel method for fast and high-quality rendering of hair","authors":"Songhua Xu, F. Lau, Hao Jiang, Yunhe Pan","doi":"10.2312/EGWR/EGSR06/331-341","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR06/331-341","url":null,"abstract":"This paper proposes a new rendering approach for hair. The model we use incorporates semantics-related information directly in the appearance modeling function, which we call a Semantics-Aware Texture Function (SATF). This new appearance modeling function is well suited for constructing an off-line/on-line hybrid algorithm to achieve fast and high-quality rendering of hair. The off-line phase generates intermediate results in a database for sample geometries under different viewing and lighting conditions, which can be used to complete a large part of the overall computation and leaves only a few dynamic tasks to be performed on-line. We propose a model having four levels, from the whole hair volume to the very fine hair density level. We further employ an efficient disk-like structure to represent hair distributions inside a hair cluster. As the intermediate database carries opacity information, self-shadows can be easily generated. We present experimental results which clearly show that our methodology can indeed produce high quality rendering results efficiently. Supplementary materials and supporting demos can be found in our project website http://www.cs.hku.hk/˜songhua/hair-rendering/.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122787911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A low dimensional framework for exact polygon-to-polygon occlusion queries","authors":"D. Haumont, Otso Makinen, S. Nirenstein","doi":"10.2312/EGWR/EGSR05/211-222","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR05/211-222","url":null,"abstract":"Despite the importance of from-region visibility computation in computer graphics, efficient analytic methods are still lacking in the general 3D case. Recently, different algorithms have appeared that maintain occlusion as a complex of polytopes in Plücker space. However, they suffer from high implementation complexity, as well as high computational and memory costs, limiting their usefulness in practice. In this paper, we present a new algorithm that simplifies implementation and computation by operating only on the skeletons of the polyhedra instead of the multi-dimensional face lattice usually used for exact occlusion queries in 3D. This algorithm is sensitive to the complexity of the silhouette of each occluding object, rather than the entire polygonal mesh of each object. An intelligent feedback mechanism is presented that greatly enhances early termination by searching for apertures between query polygons. We demonstrate that our technique is several times faster than the state of the art.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127451154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Out of core photon-mapping for large buildings","authors":"D. Fradin, Daniel Méneveaux, S. Horna","doi":"10.2312/EGWR/EGSR05/065-072","DOIUrl":"https://doi.org/10.2312/EGWR/EGSR05/065-072","url":null,"abstract":"This paper describes a new scheme for computing out-of-core global illumination in complex indoor scenes using a photon-mapping approach. Our method makes use of a cells-and-portals representation of the environment for preserving memory coherence and storing rays or photons. We have successfully applied our method to various buildings, composed of up to one billion triangles. As shown in the results, our method requires only a few hundred megabytes of memory for tracing more than 1.6 billion photons in large buildings.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124429915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}