Eurographics Symposium on Rendering: Latest Publications

Bi-Directional Polarised Light Transport
Eurographics Symposium on Rendering. Pub Date: 2016-06-22. DOI: 10.2312/sre.20161215
Michal Mojzík, Tomás Skrivan, A. Wilkie, Jaroslav Křivánek

Abstract: While there has been considerable applied research in computer graphics on polarisation rendering, no principled investigation of how the inclusion of polarisation information affects the mathematical formalisms used to describe light transport algorithms has been conducted so far. Simple uni-directional rendering techniques do not necessarily require such considerations, but for modern bi-directional light transport simulation algorithms, an in-depth solution is needed.

In this paper, we first define the transport equation for polarised light based on the Stokes vector formalism. We then define a notion of polarised visual importance, and we show that it can be conveniently represented by a 4 × 4 matrix, similar to the Mueller matrices used to represent polarised surface reflectance. Based on this representation, we then define the adjoint transport equation for polarised importance. Additionally, we write down the path integral formulation for polarised light, and point out its salient differences from the usual formulation for light intensities. Based on the above formulations, we extend some recently proposed advanced light transport simulation algorithms to support polarised light, in both surface and volumetric transport. In doing so, we point out optimisation strategies that can be used to minimise the overhead incurred by including polarisation support in such algorithms.

Citations: 13
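For context (this is not from the paper, just the standard Stokes–Mueller formalism the abstract builds on): polarised light is a Stokes 4-vector (I, Q, U, V), each interaction is a 4 × 4 Mueller matrix, and a chain of interactions composes by matrix multiplication. A minimal sketch using the textbook matrix of an ideal horizontal linear polariser:

```python
import numpy as np

# Stokes vector (I, Q, U, V): total intensity, two linear-polarisation
# components, and circular polarisation. Unpolarised light: only I != 0.
unpolarised = np.array([1.0, 0.0, 0.0, 0.0])

# Mueller matrix of an ideal horizontal linear polariser (standard form).
polariser_h = 0.5 * np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

# Successive interactions compose right-to-left by matrix multiplication.
out = polariser_h @ unpolarised
print(out)  # half the intensity passes, now fully horizontally polarised
```

This is exactly why polarised importance in the paper can also live as a 4 × 4 matrix: adjoint transport needs objects that compose the same way.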
Shape Depiction for Transparent Objects with Bucketed k-Buffer
Eurographics Symposium on Rendering. Pub Date: 2016-06-22. DOI: 10.2312/sre.20161207
D. Murray, Jérôme Baril, Xavier Granier

Abstract: Shading techniques are useful for delivering a better understanding of object shapes. When transparent objects are involved, depicting the shape characteristics of each surface is even more relevant. In this paper, we propose a method for rendering transparent scenes or objects using classical tools for shape depiction in real time. Our method provides an efficient way to compute screen-space curvature on transparent objects by using a novel screen-space representation of a scene derived from Order Independent Transparency techniques. Moreover, we propose a customizable stylization that modulates the transparency per fragment according to its curvature and depth, which can be adapted for various kinds of applications.

Citations: 2
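To illustrate the building block the abstract relies on (a hedged sketch, not the paper's method): screen-space curvature can be approximated from a normal buffer as the divergence of the normal field, computed with finite differences.

```python
import numpy as np

def screen_space_curvature(normals):
    """Curvature proxy from a screen-space normal buffer (H x W x 3):
    the divergence of the normal field, via central differences.
    Positive values suggest convex regions, negative concave ones."""
    dnx_dx = np.gradient(normals[..., 0], axis=1)
    dny_dy = np.gradient(normals[..., 1], axis=0)
    return dnx_dx + dny_dy

# Toy buffer: normals tilt steadily left-to-right, as on a convex bump.
h, w = 4, 5
nx = np.linspace(-0.5, 0.5, w)           # x-component varies across columns
normals = np.zeros((h, w, 3))
normals[..., 0] = nx
normals[..., 2] = np.sqrt(1.0 - nx**2)   # keep normals unit length
curv = screen_space_curvature(normals)
```

For transparency, the paper's contribution is making such per-fragment estimates available on *every* layer via its bucketed k-buffer, not just the front-most surface.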
Single-shot Layered Reflectance Separation Using a Polarized Light Field Camera
Eurographics Symposium on Rendering. Pub Date: 2016-06-22. DOI: 10.2312/sre.20161204
Jaewon Kim, S. Izadi, A. Ghosh

Abstract: We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance, as well as novel angular-domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach to efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.

Citations: 9
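The "standard separation" the abstract mentions is classic polarisation difference imaging (shown here for context; the paper's contribution is capturing both analyser states in one shot and the further angular separation): under polarised illumination the depolarising diffuse component splits evenly between parallel and cross analyser states, while polarisation-preserving specular reflectance survives only in the parallel state.

```python
import numpy as np

def separate(parallel, cross):
    """Polarisation difference imaging: diffuse splits 50/50 across the
    two analyser states, specular appears only in the parallel state."""
    diffuse = 2.0 * cross
    specular = np.maximum(parallel - cross, 0.0)
    return diffuse, specular

# Toy pixel with diffuse contribution 0.6 and specular contribution 0.3.
parallel = np.array([0.6 / 2 + 0.3])
cross = np.array([0.6 / 2])
d, s = separate(parallel, cross)
print(d, s)  # recovers diffuse 0.6 and specular 0.3
```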
Point-Based Light Transport for Participating Media with Refractive Boundaries
Eurographics Symposium on Rendering. Pub Date: 2016-06-22. DOI: 10.2312/sre.20161216
Beibei Wang, J. Gascuel, Nicolas Holzschuch

Abstract: Illumination effects in translucent materials are a combination of several physical phenomena: absorption and scattering inside the material, and refraction at its surface. Because refraction can focus light deep inside the material, where it will be scattered, practical illumination simulation inside translucent materials is difficult. In this paper, we present a Point-Based Global Illumination method for light transport in translucent materials with refractive boundaries. We start by placing volume light samples inside the translucent material and organising them into a spatial hierarchy. At rendering time, we gather light from these samples for each camera ray. We compute the sample contributions to single, double and multiple scattering separately, and add them. Our approach provides high-quality results, comparable to the state of the art, with significant speed-ups (from 9× to 60× depending on scene complexity) and a much smaller memory footprint.

Citations: 11
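A heavily simplified sketch of the gathering idea (an assumption-laden toy, not the paper's hierarchical algorithm): each volume light sample contributes its power attenuated by Beer–Lambert transmittance through a homogeneous medium and by inverse-square falloff, with the phase function folded into the sample power here for brevity.

```python
import numpy as np

def gather_single_scatter(point, samples, sigma_t):
    """Toy gather at a shading point inside a homogeneous medium:
    sum of sample powers attenuated by exp(-sigma_t * d) / d^2."""
    total = 0.0
    for pos, power in samples:
        d = np.linalg.norm(np.asarray(pos, float) - np.asarray(point, float))
        total += power * np.exp(-sigma_t * d) / max(d * d, 1e-8)
    return total

# Two hypothetical volume light samples: (position, power).
samples = [((1.0, 0.0, 0.0), 1.0), ((0.0, 2.0, 0.0), 4.0)]
radiance = gather_single_scatter((0.0, 0.0, 0.0), samples, sigma_t=0.5)
print(radiance)
```

The paper's spatial hierarchy exists precisely so this gather does not have to loop over every sample per camera ray.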
Local Shape Editing at the Compositing Stage
Eurographics Symposium on Rendering. Pub Date: 2016-06-22. DOI: 10.2312/SRE.20161206
C. J. Zubiaga, Gaël Guennebaud, Romain Vergne, Pascal Barla

Abstract: Modern compositing software permits linearly recombining different 3D rendered outputs (e.g., diffuse and reflection shading) in post-process, providing simple but interactive appearance manipulations. Renderers also routinely provide auxiliary buffers (e.g., normals, positions) that may be used to add local light sources or depth-of-field effects at the compositing stage. These methods are attractive both in product design and movie production, as they allow designers and technical directors to test different ideas without having to re-render an entire 3D scene.

We extend this approach to the editing of local shape: users modify the rendered normal buffer, and our system automatically modifies the diffuse and reflection buffers to provide a plausible result. Our method is based on the reconstruction of a pair of diffuse and reflection prefiltered environment maps for each distinct object/material appearing in the image. We seamlessly combine the reconstructed buffers in a recompositing pipeline that works in real time on the GPU using arbitrarily modified normals.

Citations: 2
MatCap Decomposition for Dynamic Appearance Manipulation
Eurographics Symposium on Rendering. Pub Date: 2015-06-24. DOI: 10.2312/sre.20151163
C. J. Zubiaga, A. Muñoz, Laurent Belcour, C. Bosch, Pascal Barla

Abstract: In sculpting software, MatCaps (shorthand for "Material Capture") are often used by artists as a simple and efficient way to design appearance. Similar to LitSpheres, they convey material appearance in a single image of a sphere, which can be easily transferred to an individual 3D object. Their main purpose is to capture plausible material appearance without having to specify lighting and material separately. However, this also restricts their usability, since material and lighting cannot later be modified independently; manipulations as simple as rotating the lighting with respect to the view are not possible. In this paper, we show how to decompose a MatCap into a new representation that permits dynamic appearance manipulation. We consider that the material of the depicted sphere acts as a filter in the image, and we introduce an algorithm that estimates a few relevant filter parameters interactively. We show that these parameters are sufficient to convert the input MatCap into our new representation, which affords real-time appearance manipulations through simple image re-filtering operations. This includes lighting rotations, the painting of additional reflections, material variations, selective color changes, and silhouette effects that mimic Fresnel or asperity scattering.

Citations: 8
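For readers unfamiliar with the representation being decomposed: a MatCap is indexed purely by the view-space normal, which is both what makes it cheap and why lighting cannot rotate independently. The standard lookup (background context, not the paper's decomposition):

```python
import numpy as np

def matcap_uv(normal_view):
    """Standard MatCap lookup: map the view-space normal's x,y
    components to texture coordinates on the sphere image."""
    n = np.asarray(normal_view, dtype=float)
    n = n / np.linalg.norm(n)
    return n[0] * 0.5 + 0.5, n[1] * 0.5 + 0.5

# A normal facing the camera samples the centre of the MatCap image.
u, v = matcap_uv((0.0, 0.0, 1.0))
print(u, v)  # (0.5, 0.5)
```

Because shading depends only on the normal, rotating the light means re-filtering the image itself, which is exactly the operation the paper's decomposition enables.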
Efficient Visibility Heuristics for kd-trees Using the RTSAH
Eurographics Symposium on Rendering. Pub Date: 2015-06-24. DOI: 10.2312/sre.20151164
Matthias Moulin, Niels Billen, P. Dutré

Abstract: Acceleration data structures such as kd-trees aim at reducing the per-ray cost, which is crucial for rendering performance. The de facto standard for constructing kd-trees, the Surface Area Heuristic (SAH), does not take ray termination into account and instead assumes rays never hit a geometric primitive. The Ray Termination Surface Area Heuristic (RTSAH) is a cost metric, originally used for determining the traversal order of voxels for occlusion rays, that takes ray termination into account. We adapt the RTSAH to building kd-trees that aim at reducing the per-ray cost. Our build procedure has the same overall computational complexity and considers the same finite set of splitting planes as the SAH. By taking ray termination into account, we favor cutting off child voxels that are not or hardly visible to each other. This results in fundamentally different and higher-quality kd-trees compared to the SAH.

Citations: 1
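The baseline cost model the RTSAH refines is the classic SAH, which scores a candidate split by traversal cost plus expected intersection cost in each child, weighted by the geometric probability (surface-area ratio) that a random ray hits that child. A sketch with hypothetical cost constants:

```python
def sah_cost(sa_left, sa_right, sa_parent, n_left, n_right,
             c_traversal=1.0, c_intersect=1.5):
    """Surface Area Heuristic cost of a candidate kd-tree split.
    The probability a random non-terminating ray enters a child is
    approximated by its surface-area ratio to the parent node."""
    p_left = sa_left / sa_parent
    p_right = sa_right / sa_parent
    return c_traversal + c_intersect * (p_left * n_left + p_right * n_right)

# Splitting 8 primitives evenly, with each child covering 60% of the
# parent's surface area (children overlap-free but share a face).
cost = sah_cost(sa_left=60.0, sa_right=60.0, sa_parent=100.0,
                n_left=4, n_right=4)
print(cost)
```

The "assumes rays never hit a geometric primitive" criticism is visible in `p_left`/`p_right`: the surface-area ratio models rays that pass through unhindered, which is what the RTSAH's termination term corrects.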
Color Clipping and Over-exposure Correction
Eurographics Symposium on Rendering. Pub Date: 2015-06-23. DOI: 10.2312/sre.20151169
M. Abebe, T. Pouli, J. Kervec, M. Larabi

Abstract: Limitations of the camera or extreme contrast in scenes can lead to clipped areas in captured images. Irrespective of the cause, color clipping and over-exposure lead to loss of texture and detail, impacting the color appearance and visual quality of the image. We propose a new over-exposure and clipping correction method, which relies on the existing correlation between the RGB channels of color images to recover clipped information. Using a novel region grouping approach, clipped regions are treated coherently both spatially and temporally. To reconstruct over-exposed areas where all channels are clipped, we employ a brightness profile reshaping scheme, which aims to preserve the appearance of highlights while boosting local brightness. Our method is evaluated using objective metrics as well as a subjective study based on an ITU standardized protocol, showing that our correction leads to improved results compared to previous related techniques. We explore several potential applications of our method, including extending it to video as well as using it as a preprocessing step prior to inverse tone mapping.

Citations: 6
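A toy illustration of the cross-channel correlation the abstract invokes (an assumption-laden sketch, not the paper's region-grouping algorithm): where one channel is clipped, its value can be predicted from an unclipped channel using the channel ratio observed at a nearby unclipped pixel of the same surface.

```python
import numpy as np

def estimate_clipped_channel(pixel, reference, clipped_idx):
    """Predict a clipped channel from an unclipped one, using the channel
    ratio at a nearby unclipped reference pixel of the same material."""
    ref = np.asarray(reference, dtype=float)
    pix = np.asarray(pixel, dtype=float)
    unclipped_idx = (clipped_idx + 1) % 3      # pick another channel
    ratio = ref[clipped_idx] / ref[unclipped_idx]
    return pix[unclipped_idx] * ratio

# Red is clipped at 1.0; a nearby unclipped pixel shows R = 1.5 * G here.
estimate = estimate_clipped_channel(pixel=(1.0, 0.8, 0.7),
                                    reference=(0.6, 0.4, 0.35),
                                    clipped_idx=0)
print(estimate)  # 0.8 * (0.6 / 0.4) = 1.2, above the clip level
```

When all three channels are clipped, no such ratio survives, which is why the paper falls back to brightness profile reshaping in those regions.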
Gradient-Domain Bidirectional Path Tracing
Eurographics Symposium on Rendering. Pub Date: 2015-06-22. DOI: 10.2312/sre.20151168
Marco Manzi, M. Kettunen, M. Aittala, J. Lehtinen, F. Durand, Matthias Zwicker

Abstract: Gradient-domain path tracing has recently been introduced as an efficient realistic image synthesis algorithm. This paper introduces a bidirectional gradient-domain sampler that outperforms traditional bidirectional path tracing, often by a factor of two to five in terms of squared error at equal render time. It also improves over unidirectional gradient-domain path tracing in challenging visibility conditions, much as conventional bidirectional path tracing improves over its unidirectional counterpart. Our algorithm leverages a novel multiple importance sampling technique and an efficient implementation of a high-quality shift mapping suitable for bidirectional path tracing. We demonstrate the versatility of our approach in several challenging light transport scenarios.

Citations: 27
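Common to all gradient-domain renderers, this one included, is the final reconstruction step: solve a screened Poisson problem that matches the sampled image gradients while staying close to the noisy primal estimate. A 1D toy version via linear least squares (the real solve is 2D; `alpha` is a hypothetical screening weight):

```python
import numpy as np

def reconstruct_1d(primal, grad, alpha=0.2):
    """Screened-Poisson-style reconstruction in 1D: find x whose finite
    differences match the sampled gradients, softly tied (weight alpha)
    to the noisy primal estimate, via linear least squares."""
    n = len(primal)
    rows, rhs = [], []
    for i in range(n - 1):                  # gradient constraints
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(grad[i])
    for i in range(n):                      # screening to the primal
        r = np.zeros(n); r[i] = alpha
        rows.append(r); rhs.append(alpha * primal[i])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Clean gradients of a ramp plus a noisy primal: the low-variance
# gradients dominate, so the reconstruction is much closer to the ramp.
true = np.array([0.0, 1.0, 2.0, 3.0])
noisy = true + np.array([0.3, -0.2, 0.1, -0.3])
rec = reconstruct_1d(noisy, np.diff(true))
```

The variance benefit comes from gradients being mostly near zero (or, here, noise-free) even when the primal estimate is noisy.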
Real-time multi-perspective rendering on graphics hardware
Eurographics Symposium on Rendering. Pub Date: 2006-07-30. DOI: 10.2312/EGWR/EGSR06/093-102
Xian Hou, Li-Yi Wei, H. Shum, B. Guo

Abstract: Multi-perspective rendering has a variety of applications; examples include lens refraction, curved mirror reflection, caustics, as well as depiction and visualization. However, multi-perspective rendering is not yet practical on polygonal graphics hardware, which so far has utilized mostly single-perspective (pin-hole or orthographic) projections.

In this paper, we present a methodology for real-time multi-perspective rendering on polygonal graphics hardware. Our approach approximates a general multi-perspective projection surface (such as a curved mirror or lens) via a piecewise-linear triangle mesh, upon which each triangle is a simple multi-perspective camera parameterized by three rays at the triangle vertices. We derive analytic formulae showing that each triangle projection can be implemented as a pair of vertex and fragment programs on programmable graphics hardware. We demonstrate real-time performance of a variety of applications enabled by our technique, including reflection, refraction, caustics, and visualization.

Citations: 28
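The triangle-camera parameterization described above can be sketched as barycentric interpolation of the three vertex rays (a hedged illustration of the idea, not the paper's GPU vertex/fragment implementation):

```python
import numpy as np

def triangle_camera_ray(origins, directions, bary):
    """Toy multi-perspective triangle camera: the ray through an interior
    point of the triangle is the barycentric blend of the three vertex
    rays (origin and direction interpolated, direction renormalized)."""
    b = np.asarray(bary, dtype=float)
    o = b @ np.asarray(origins, dtype=float)
    d = b @ np.asarray(directions, dtype=float)
    return o, d / np.linalg.norm(d)

# Three parallel vertex rays degenerate to an orthographic patch:
origins = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
directions = [(0.0, 0.0, 1.0)] * 3
o, d = triangle_camera_ray(origins, directions, bary=(1/3, 1/3, 1/3))
print(o, d)  # centroid origin, direction (0, 0, 1)
```

Varying the vertex rays instead of keeping them parallel is what lets each triangle approximate a patch of a curved mirror or lens.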