Eurographics Symposium on Rendering: Latest Publications

A Learned Radiance-Field Representation for Complex Luminaires
Eurographics Symposium on Rendering Pub Date : 2022-07-11 DOI: 10.2312/sr.20221155
J. Condor, A. Jarabo
{"title":"A Learned Radiance-Field Representation for Complex Luminaires","authors":"J. Condor, A. Jarabo","doi":"10.2312/sr.20221155","DOIUrl":"https://doi.org/10.2312/sr.20221155","url":null,"abstract":"We propose an efficient method for rendering complex luminaires using a high-quality octree-based representation of the luminaire emission. Complex luminaires are a particularly challenging problem in rendering, due to their caustic light paths inside the luminaire. We reduce the geometric complexity of luminaires by using a simple proxy geometry and encode the visually-complex emitted light field by using a neural radiance field. We tackle the multiple challenges of using NeRFs for representing luminaires, including their high dynamic range, high-frequency content and null-emission areas, by proposing a specialized loss function. For rendering, we distill our luminaires' NeRF into a Plenoctree, which we can be easily integrated into traditional rendering systems. Our approach allows for speed-ups of up to 2 orders of magnitude in scenes containing complex luminaires introducing minimal error.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128566267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting
Eurographics Symposium on Rendering Pub Date : 2021-07-26 DOI: 10.2312/sr.20211299
Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, R. Ramamoorthi
{"title":"NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting","authors":"Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, R. Ramamoorthi","doi":"10.2312/sr.20211299","DOIUrl":"https://doi.org/10.2312/sr.20211299","url":null,"abstract":"Human portraits exhibit various appearances when observed from different views under different lighting conditions. We can easily imagine how the face will look like in another setup, but computer algorithms still fail on this problem given limited observations. To this end, we present a system for portrait view synthesis and relighting: given multiple portraits, we use a neural network to predict the light-transport field in 3D space, and from the predicted Neural Light-transport Field (NeLF) produce a portrait from a new camera view under a new environmental lighting. Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions. Our method achieves simultaneous view synthesis and relighting given multi-view portraits as the input, and achieves state-of-the-art results.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126888923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Single-image Full-body Human Relighting
Eurographics Symposium on Rendering Pub Date : 2021-07-15 DOI: 10.2312/sr.20211300
Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, B. Masiá, Diego Gutierrez
{"title":"Single-image Full-body Human Relighting","authors":"Manuel Lagunas, Xin Sun, Jimei Yang, Ruben Villegas, Jianming Zhang, Zhixin Shu, B. Masiá, Diego Gutierrez","doi":"10.2312/sr.20211300","DOIUrl":"https://doi.org/10.2312/sr.20211300","url":null,"abstract":"We present a single-image data-driven method to automatically relight images with full-body humans in them. Our framework is based on a realistic scene decomposition leveraging precomputed radiance transfer (PRT) and spherical harmonics (SH) lighting. In contrast to previous work, we lift the assumptions on Lambertian materials and explicitly model diffuse and specular reflectance in our data. Moreover, we introduce an additional light-dependent residual term that accounts for errors in the PRT-based image reconstruction. We propose a new deep learning architecture, tailored to the decomposition performed in PRT, that is trained using a combination of L1, logarithmic, and rendering losses. Our model outperforms the state of the art for full-body human relighting both with synthetic images and photographs.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132695467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
Eurographics Symposium on Rendering Pub Date : 2021-05-15 DOI: 10.2312/sr.20221156
Celong Liu, Zhong Li, Junsong Yuan, Yi Xu
{"title":"NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field","authors":"Celong Liu, Zhong Li, Junsong Yuan, Yi Xu","doi":"10.2312/sr.20221156","DOIUrl":"https://doi.org/10.2312/sr.20221156","url":null,"abstract":"In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. Then, the scene-specific model is used to synthesize novel views. Different from previous light field approaches which require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images. Per-ray depth can be optionally predicted by the network, thus enabling applications such as auto refocus. Our novel view synthesis results are comparable to the state-of-the-arts, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115606899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
Appearance-Driven Automatic 3D Model Simplification
Eurographics Symposium on Rendering Pub Date : 2021-04-08 DOI: 10.2312/sr.20211293
J. Hasselgren, Jacob Munkberg, J. Lehtinen, M. Aittala, S. Laine
{"title":"Appearance-Driven Automatic 3D Model Simplification","authors":"J. Hasselgren, Jacob Munkberg, J. Lehtinen, M. Aittala, S. Laine","doi":"10.2312/sr.20211293","DOIUrl":"https://doi.org/10.2312/sr.20211293","url":null,"abstract":"We present a suite of techniques for jointly optimizing triangle meshes and shading models to match the appearance of reference scenes. This capability has a number of uses, including appearance-preserving simplification of extremely complex assets, conversion between rendering systems, and even conversion between geometric scene representations. We follow and extend the classic analysis-by-synthesis family of techniques: enabled by a highly efficient differentiable renderer and modern nonlinear optimization algorithms, our results are driven to minimize the image-space difference to the target scene when rendered in similar viewing and lighting conditions. As the only signals driving the optimization are differences in rendered images, the approach is highly general and versatile: it easily supports many different forward rendering models such as normal mapping, spatially-varying BRDFs, displacement mapping, etc. Supervision through images only is also key to the ability to easily convert between rendering systems and scene representations. We output triangle meshes with textured materials to ensure that the models render efficiently on modern graphics hardware and benefit from, e.g., hardware-accelerated rasterization, ray tracing, and filtered texture lookups. Our system is integrated in a small Python code base, and can be applied at high resolutions and on large models. We describe several use cases, including mesh decimation, level of detail generation, seamless mesh filtering and approximations of aggregate geometry.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129969816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
Adaptive Multi-view Path Tracing
Eurographics Symposium on Rendering Pub Date : 2019-07-10 DOI: 10.2312/sr.20191217
Basile Fraboni, J. Iehl, V. Nivoliers, Guillaume Bouchard
{"title":"Adaptive Multi-view Path Tracing","authors":"Basile Fraboni, J. Iehl, V. Nivoliers, Guillaume Bouchard","doi":"10.2312/sr.20191217","DOIUrl":"https://doi.org/10.2312/sr.20191217","url":null,"abstract":"Rendering photo-realistic image sequences using path tracing and Monte Carlo integration often requires sampling a large number of paths to get converged results. In the context of rendering multiple views or animated sequences, such sampling can be highly redundant. Several methods have been developed to share sampled paths between spatially or temporarily similar views. However, such sharing is challenging since it can lead to bias in the final images. Our contribution is a Monte Carlo sampling technique which generates paths, taking into account several cameras. First, we sample the scene from all the cameras to generate hit points. Then, an importance sampling technique generates bouncing directions which are shared by a subset of cameras. This set of hit points and bouncing directions is then used within a regular path tracing solution. For animated scenes, paths remain valid for a fixed time only, but sharing can still occur between cameras as long as their exposure time intervals overlap. We show that our technique generates less noise than regular path tracing and does not introduce noticeable bias.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132644747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition
Eurographics Symposium on Rendering Pub Date : 2018-07-30 DOI: 10.2312/SRE.20181172
Sai Bi, N. Kalantari, R. Ramamoorthi
{"title":"Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition","authors":"Sai Bi, N. Kalantari, R. Ramamoorthi","doi":"10.2312/SRE.20181172","DOIUrl":"https://doi.org/10.2312/SRE.20181172","url":null,"abstract":"Intrinsic image decomposition is the process of separating the reflectance and shading layers of an image, which is a challenging and underdetermined problem. In this paper, we propose to systematically address this problem using a deep convolutional neural network (CNN). Although deep learning (DL) has been recently used to handle this application, the current DL methods train the network only on synthetic images as obtaining ground truth reflectance and shading for real images is difficult. Therefore, these methods fail to produce reasonable results on real images and often perform worse than the non-DL techniques. We overcome this limitation by proposing a novel hybrid approach to train our network on both synthetic and real images. Specifically, in addition to directly supervising the network using synthetic images, we train the network by enforcing it to produce the same reflectance for a pair of images of the same real-world scene with different illuminations. Furthermore, we improve the results by incorporating a bilateral solver layer into our system during both training and test stages. Experimental results show that our approach produces better results than the state-of-the-art DL and non-DL methods on various synthetic and real datasets both visually and numerically.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123717576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Scalable Real-Time Shadows using Clustering and Metric Trees
Eurographics Symposium on Rendering Pub Date : 2018-07-02 DOI: 10.2312/sre.20181175
François Deves, F. Mora, L. Aveneau, D. Ghazanfarpour
{"title":"Scalable Real-Time Shadows using Clustering and Metric Trees","authors":"François Deves, F. Mora, L. Aveneau, D. Ghazanfarpour","doi":"10.2312/sre.20181175","DOIUrl":"https://doi.org/10.2312/sre.20181175","url":null,"abstract":"Real-time shadow algorithms based on geometry generally produce high quality shadows. Recent works have considerably improved their efficiency. However, scalability remains an issue because these methods strongly depend on the geometric complexity. This paper focuses on this problem. We present a new real-time shadow algorithm for non-deformable models that scales the geometric complexity. Our method groups triangles into clusters by precomputing bounding spheres or bounding capsules (line-swept spheres). At each frame, we build a ternary metric tree to partition the spheres and capsules according to their apparent distance from the light. Then, this tree is used as an acceleration data structure to determine the visibility of the light for each image point. While clustering allows to scale down the geometric complexity, metric trees allow to encode the bounding volumes of the clusters in a hierarchical data structure. Our experiments show that our approach remains efficient, including with models with over 70 million triangles.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133954684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Primary Sample Space Path Guiding
Eurographics Symposium on Rendering Pub Date : 2018-07-01 DOI: 10.2312/sre.20181174
J. Guo, P. Bauszat, J. Bikker, E. Eisemann
{"title":"Primary Sample Space Path Guiding","authors":"J. Guo, P. Bauszat, J. Bikker, E. Eisemann","doi":"10.2312/sre.20181174","DOIUrl":"https://doi.org/10.2312/sre.20181174","url":null,"abstract":"Guiding path tracing in light transport simulation has been one of the practical choices for variance reduction in production rendering. For this purpose, typically structures in the spatial-directional domain are built. We present a novel scheme for unbiased path guiding. Different from existing methods, we work in primary sample space. We collect records of primary samples as well as the luminance that the resulting path contributes and build a multiple dimensional structure, from which we derive random numbers that are fed into the path tracer. This scheme is executed completely outside the rendering kernel. We demonstrate that this method is practical and efficient. We manage to reduce variance and zero radiance paths by only working in the primary sample space.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115494476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
Diffuse-Specular Separation using Binary Spherical Gradient Illumination
Eurographics Symposium on Rendering Pub Date : 2018-07-01 DOI: 10.2312/sre.20181167
Christos Kampouris, S. Zafeiriou, A. Ghosh
{"title":"Diffuse-Specular Separation using Binary Spherical Gradient Illumination","authors":"Christos Kampouris, S. Zafeiriou, A. Ghosh","doi":"10.2312/sre.20181167","DOIUrl":"https://doi.org/10.2312/sre.20181167","url":null,"abstract":"We introduce a novel method for view-independent diffuse-specular separation of albedo and photometric normals without requiring polarization using binary spherical gradient illumination. The key idea is that with binary gradient illumination, a dielectric surface oriented towards the dark hemisphere exhibits pure diffuse reflectance while a surface oriented towards the bright hemisphere exhibits both diffuse and specular reflectance. We exploit this observation to formulate diffuse-specular separation based on color-space analysis of a surface's response to binary spherical gradients and their complements. The method does not impose restrictions on viewpoints and requires fewer photographs for multiview acquisition than polarized spherical gradient illumination. We further demonstrate an efficient two-shot capture using spectral multiplexing of the illumination that enables diffuse-specular separation of albedo and heuristic separation of photometric normals.","PeriodicalId":363391,"journal":{"name":"Eurographics Symposium on Rendering","volume":"131 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126809646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17