Latest publications: Eurographics Symposium on Rendering

Real-time multiple scattering in participating media with illumination networks
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/277-282
László Szirmay-Kalos, M. Sbert, Tamás Umenhoffer
Abstract: This paper proposes a real-time method for computing multiple scattering in non-homogeneous participating media with general phase functions. The volume, represented by a particle system, is assumed to be static, but the lights and the camera may move. Lights can be arbitrarily close to the volume and can even be inside it. Real-time performance is achieved by reusing light-scattering paths that are generated with global line bundles traced in sample directions during a preprocessing phase. For each particle we obtain the other particles visible in each of the sample directions, together with their radiances toward the given particle. This information is stored in an illumination network that allows fast iteration of the volumetric rendering equation. The illumination network can be stored in two-dimensional arrays indexed by the particles and the directions, respectively. Interpreting these arrays as texture maps, the iteration of the scattering steps can be executed efficiently by the graphics hardware, and the illumination can spread over the medium in real time.
Citations: 31
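The gather step the abstract describes (two-dimensional arrays indexed by particles and directions, iterated to spread illumination through the medium) can be sketched on the CPU. This is a minimal sketch under stated assumptions: the array sizes, the random network, and the uniform isotropic phase weight are all illustrative, not the paper's GPU texture implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: P particles, D global sample directions.
P, D = 64, 16

# Illumination network (built in the paper by tracing global line bundles
# in a preprocessing phase): vis[p, d] is the index of the particle seen
# from particle p in sample direction d, or -1 if the ray leaves the volume.
vis = rng.integers(-1, P, size=(P, D))

# Simplified optics: per-particle albedo and a uniform isotropic phase
# weight over the sampled directions (the paper supports general phases).
albedo = rng.uniform(0.5, 0.9, size=P)
phase = 1.0 / D

# Radiance leaving each particle toward each direction, seeded here by a
# stand-in for the direct (single-scattered) lighting term.
emission = rng.uniform(0.0, 1.0, size=(P, D))
I = emission.copy()

# Iterate the volumetric rendering equation over the network: gather the
# radiance arriving along each stored link, then scatter it back out.
for _ in range(8):
    incoming = np.where(vis >= 0, I[np.clip(vis, 0, None), np.arange(D)], 0.0)
    scattered = albedo[:, None] * phase * incoming.sum(axis=1, keepdims=True)
    I = emission + scattered
```

Each loop iteration corresponds to one scattering step; on the GPU the two index arrays become textures and the gather becomes a dependent texture fetch.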
Fast exact from-region visibility in urban scenes
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/223-230
Jiří Bittner, Peter Wonka, M. Wimmer
Abstract: We present a fast exact from-region visibility algorithm for 2.5D urban scenes. The algorithm uses a subdivision of line space to identify visibility interactions in a 2D footprint of the scene. Visibility in the remaining vertical dimension is resolved by testing for the existence of lines stabbing sequences of virtual portals. Our results show that exact analytic from-region visibility in urban scenes can be computed in times comparable to, or even better than, those of recent conservative methods.
Citations: 25
A hybrid Monte Carlo method for accurate and efficient subsurface scattering
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/283-290
Hongsong Li, F. Pellacini, K. Torrance
Abstract: Subsurface scattering is a fundamental aspect of surface appearance, responsible for the characteristic look of many materials. Monte Carlo path tracing can simulate the scattering of light inside a translucent object with high accuracy, albeit at the cost of long computation times. In a seminal work, Jensen et al. [JMLH01] presented a more efficient technique for simulating subsurface scattering that, while producing accurate results for translucent, optically thick materials, exhibits artifacts for semi-transparent, optically thin ones, especially in regions of high curvature. This paper presents a hybrid Monte Carlo technique capable of simulating a wide range of materials exhibiting subsurface scattering, from translucent to semi-transparent, with accuracy comparable to Monte Carlo techniques but at a much lower computational cost. Our approach uses Monte Carlo path tracing for the first several scattering events, in order to estimate the directional-diffuse component of subsurface scattering, and switches to a dipole diffusion approximation only when the path penetrates deeply enough into the surface. By combining the accuracy of Monte Carlo integration with the efficiency of the dipole diffusion approximation, our hybrid method produces results as accurate as full Monte Carlo simulations at a speed comparable to the Jensen et al. approximation, thus extending its usefulness to a much wider range of materials.
Citations: 33
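The control flow of the hybrid scheme, reduced to a depth-only random walk, might look like the sketch below. The coefficients, the switch depth, and the analytic fallback are placeholder assumptions; a real implementation would trace full 3D paths and evaluate the Jensen et al. dipole where the sketch calls `diffusion_estimate`.

```python
import math
import random

random.seed(1)

SIGMA_T = 10.0       # extinction coefficient (1/mm), assumed value
ALBEDO = 0.9         # single-scattering albedo, assumed value
SWITCH_DEPTH = 0.5   # depth (mm) at which we hand over to diffusion

def diffusion_estimate(depth):
    """Placeholder for the dipole diffusion approximation: deep paths are
    terminated with a smooth analytic estimate instead of being traced
    event by event."""
    return ALBEDO * math.exp(-depth)

def hybrid_walk(max_events=1000):
    """Random walk in depth only (a 1D simplification): Monte Carlo for
    the shallow scattering events, diffusion once the path is deep."""
    # Enter the medium going straight down; sample a free-flight distance.
    depth = -math.log(1.0 - random.random()) / SIGMA_T
    weight = 1.0
    for _ in range(max_events):
        if depth >= SWITCH_DEPTH:
            # Deep enough: switch to the diffusion approximation.
            return weight * diffusion_estimate(depth)
        weight *= ALBEDO  # scattering event: attenuate by the albedo
        t = -math.log(1.0 - random.random()) / SIGMA_T
        depth += t * random.choice((-1.0, 1.0))
        if depth <= 0.0:
            return weight  # path re-emerged at the surface: pure MC result
    return 0.0

# Average many walks to estimate the total diffuse response.
est = sum(hybrid_walk() for _ in range(2000)) / 2000
```

The key idea survives the simplification: shallow, directionally structured transport is handled stochastically, while the deep, nearly isotropic remainder is replaced by a cheap analytic term.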
Table-top computed lighting for practical digital photography
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/165-172
Ankit Mohan, J. Tumblin, Bobby Bodenheimer, C. Grimm, Reynold J. Bailey
Abstract: We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure, and use a camera to quickly record low-resolution photos as the light scans the box interior. Optimization guided by interactive user sketching selects a small set of frames whose weighted sum best matches the target image. The system then repeats the lighting used in each of these frames and constructs a high-resolution result from re-photographed basis images. Unlike previous image-based relighting efforts, our method requires only one light source, yet can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a hand-held light, and may be suitable for battery-powered field photography equipment that fits in a backpack.
Citations: 19
Experimental analysis of BRDF models
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/117-126
A. Ngan, F. Durand, W. Matusik
Abstract: The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material through its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs, but analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution; we have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images, we provide insights into the performance of the various models. We conclude that for most isotropic materials, physically based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates that current parametric reflectance models cannot represent their appearance faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.
Citations: 509
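The two specular-lobe parameterizations the paper compares can be shown in a few lines: one lobe peaked around the mirror direction (Phong-style) and one expressed with respect to the half-vector (Blinn-Phong/microfacet-style). The directions and shininess exponent below are arbitrary assumptions chosen only for illustration.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

n = np.array([0.0, 0.0, 1.0])               # surface normal
wi = normalize(np.array([1.0, 0.0, 1.0]))   # direction toward the light
wo = normalize(np.array([-0.8, 0.1, 1.0]))  # direction toward the viewer
shininess = 50.0

# Lobe defined around the mirror direction: reflect wi about n, then
# raise the cosine with the view direction to a power.
r = normalize(2.0 * np.dot(n, wi) * n - wi)
phong = max(np.dot(r, wo), 0.0) ** shininess

# Lobe defined with respect to the half-vector: peak where the half
# vector between light and view aligns with the normal. Ngan et al.
# find this form matches measured reflection lobes more accurately.
h = normalize(wi + wo)
blinn = max(np.dot(n, h), 0.0) ** shininess
```

The two scalars differ for the same geometry because the half-vector lobe narrows differently at grazing angles, which is exactly the shape difference the paper's fits expose.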
Ray maps for global illumination
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/043-054
V. Havran, Jiří Bittner, R. Herzog, H. Seidel
Abstract: We describe a novel data structure for representing light transport called a ray map. The ray map extends the concept of photon maps: it stores not only photon impacts but whole photon paths. We demonstrate the utility of ray maps for global illumination by eliminating boundary bias and reducing topological bias of density estimation. Thanks to the elimination of boundary bias, we can use ray maps for fast direct visualization with image quality close to that obtained by an expensive final gathering step. We describe in detail our implementation of the ray map using a lazily constructed kD-tree, and present several optimizations bringing ray map query performance close to that of the photon map.
Citations: 16
Estimation of 3D faces and illumination from single photographs using a bilinear illumination model
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/073-082
Jinho Lee, Hanspeter Pfister, B. Moghaddam, R. Machiraju
Abstract: 3D face modeling is still one of the biggest challenges in computer graphics. In this paper we present a novel framework that acquires the 3D shape, texture, pose, and illumination of a face from a single photograph. Additionally, we show how to recreate, and essentially relight, a face under varying illumination conditions. Using a custom-built face scanning system, we have collected 3D face scans and light reflection images of a large and diverse group of human subjects. We derive a morphable face model for 3D face shapes and accompanying textures by transforming the data into a linear vector subspace. The acquired images of faces under variable illumination are then used to derive a bilinear illumination model that spans 3D face shape and illumination variations. Using both models, we propose a fitting framework that estimates the parameters of the morphable model from a single photograph. Our framework handles complex face reflectance and lighting environments in an efficient and robust manner. In the results section we compare our method to existing ones and demonstrate its efficacy in reconstructing 3D face models from a single photograph. We also provide several examples of facial relighting (on 2D images) by performing an adequate decomposition of the estimated illumination using our framework.
Citations: 20
Motion blur for textures by means of anisotropic filtering
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/105-110
J. Loviscach
Abstract: The anisotropic filtering offered by current graphics hardware can be employed to apply motion blur to textures. The solution proposed here uses a standard texture together with a vertex and a pixel shader acting on a mesh with augmented vertex data. Our method generalizes the usual spatial anisotropic MIP mapping to also include temporal effects, and automatically processes any time series of affine 3D transformations of an object. Application fields include animations containing 2D lettering, as well as objects such as spoked wheels that are cookie-cut from large polygons using an alpha channel. We present two different implementations of the technique.
Citations: 24
Colorization by example
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/201-210
Revital Ironi, D. Cohen-Or, Dani Lischinski
Abstract: We present a new method for colorizing grayscale images by transferring color from a segmented example image. Rather than relying on a series of independent pixel-level decisions, we develop a strategy that accounts for the higher-level context of each pixel. The colorizations generated by our approach exhibit a much higher degree of spatial consistency than previous automatic color transfer methods [WAM02]. We also demonstrate that our method requires considerably less manual effort than previous user-assisted colorization methods [LLW04]. Given a grayscale image to colorize, we first determine for each pixel which example segment it should learn its color from. This is done automatically using a robust supervised classification scheme that analyzes the low-level feature space defined by small neighborhoods of pixels in the example image. Next, each pixel is assigned a color from the appropriate region using a neighborhood matching metric, combined with spatial filtering for improved spatial coherence. Each color assignment is associated with a confidence value, and pixels with a sufficiently high confidence level are provided as "micro-scribbles" to the optimization-based colorization algorithm of Levin et al. [LLW04], which produces the final complete colorization of the image.
Citations: 351
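The confidence-gated classification step can be sketched as follows. The example segments, the one-dimensional luminance feature, and the margin threshold are illustrative assumptions standing in for the paper's richer neighborhood feature space and supervised classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical example segments: a mean-luminance feature and the chroma
# (a, b in Lab space) that each segment would transfer.
segments = {
    "sky":   {"lum": 0.8, "chroma": (-10.0, -30.0)},
    "grass": {"lum": 0.4, "chroma": (-35.0,  40.0)},
    "soil":  {"lum": 0.2, "chroma": ( 15.0,  25.0)},
}

gray = rng.uniform(0.0, 1.0, size=(8, 8))  # grayscale image to colorize

names = list(segments)
lums = np.array([segments[k]["lum"] for k in names])

# Classify each pixel by distance in the (1D) feature space, and derive a
# confidence from the margin between the best and second-best match.
dist = np.abs(gray[..., None] - lums)            # shape (8, 8, 3)
order = np.argsort(dist, axis=-1)
best = order[..., 0]
margin = (np.take_along_axis(dist, order[..., 1:2], -1)[..., 0]
          - np.take_along_axis(dist, order[..., 0:1], -1)[..., 0])
confident = margin > 0.1                         # threshold is an assumption

# High-confidence pixels become "micro-scribbles": fixed chroma seeds that
# an optimization-based propagation (Levin et al. [LLW04]) would spread
# into the remaining, low-confidence pixels.
chroma = np.zeros(gray.shape + (2,))
for i, k in enumerate(names):
    chroma[(best == i) & confident] = segments[k]["chroma"]
```

Low-confidence pixels keep zero chroma here; in the full method they receive color only through the subsequent optimization, which is what gives the result its spatial consistency.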
Stippling and silhouettes rendering in geometry-image space
Eurographics Symposium on Rendering. Pub Date: 2005-06-29. DOI: 10.2312/EGWR/EGSR05/193-200
Xiaoru Yuan, Minh X. Nguyen, N. Zhang, Baoquan Chen
Abstract: We present a novel non-photorealistic rendering method that performs all operations in a geometry-image domain. We first apply global conformal parameterization to the input geometry model and generate the corresponding geometry images. Strokes and silhouettes are then computed in the geometry-image domain. The geometry-image space combines the benefits of existing image-space and object-space approaches: it allows us to take advantage of the regularity of 2D images while retaining full access to the object's geometry. A wide range of image processing tools can be leveraged to assist the various operations involved in achieving non-photorealistic rendering with coherence.
Citations: 20