Computer Graphics Forum: Latest Articles

Reconstructing Curves from Sparse Samples on Riemannian Manifolds
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-31 DOI: 10.1111/cgf.15136
D. Marin, F. Maggioli, S. Melzi, S. Ohrhallinger, M. Wimmer
{"title":"Reconstructing Curves from Sparse Samples on Riemannian Manifolds","authors":"D. Marin,&nbsp;F. Maggioli,&nbsp;S. Melzi,&nbsp;S. Ohrhallinger,&nbsp;M. Wimmer","doi":"10.1111/cgf.15136","DOIUrl":"10.1111/cgf.15136","url":null,"abstract":"<div>\u0000 \u0000 <p>Reconstructing 2D curves from sample points has long been a critical challenge in computer graphics, finding essential applications in vector graphics. The design and editing of curves on surfaces has only recently begun to receive attention, primarily relying on human assistance, and where not, limited by very strict sampling conditions. In this work, we formally improve on the state-of-the-art requirements and introduce an innovative algorithm capable of reconstructing closed curves directly on surfaces from a given sparse set of sample points. We extend and adapt a state-of-the-art planar curve reconstruction method to the realm of surfaces while dealing with the challenges arising from working on non-Euclidean domains. We demonstrate the robustness of our method by reconstructing multiple curves on various surface meshes. We explore novel potential applications of our approach, allowing for automated reconstruction of curves on Riemannian manifolds.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15136","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
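To make the non-Euclidean setting concrete, here is a minimal Python sketch of the general recipe the abstract describes: lift a planar reconstruction rule to a surface by swapping Euclidean distances for mesh geodesics. The Dijkstra edge-path approximation and the greedy nearest-neighbour chaining are illustrative stand-ins, not the authors' algorithm.

```python
# Sketch: chain sparse samples on a mesh into a closed curve using
# edge-path (Dijkstra) distances as a stand-in for true geodesics.
import heapq
import numpy as np

def dijkstra_distances(verts, edges, source):
    """Approximate geodesic distances from vertex `source` along mesh edges."""
    adj = [[] for _ in verts]
    for i, j in edges:
        w = float(np.linalg.norm(verts[i] - verts[j]))
        adj[i].append((j, w))
        adj[j].append((i, w))
    dist = np.full(len(verts), np.inf)
    dist[source] = 0.0
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def chain_samples(sample_vids, verts, edges):
    """Greedy nearest-neighbour tour in the geodesic metric; returns a
    closed ordering of the sample indices."""
    D = np.stack([dijkstra_distances(verts, edges, s)[sample_vids]
                  for s in sample_vids])   # pairwise geodesic distances
    order, used = [0], {0}
    while len(order) < len(sample_vids):
        rest = [j for j in range(len(sample_vids)) if j not in used]
        nxt = min(rest, key=lambda j: D[order[-1], j])
        order.append(nxt)
        used.add(nxt)
    return order + [0]  # close the loop
```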
Optimized Dual-Volumes for Tetrahedral Meshes
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-31 DOI: 10.1111/cgf.15133
Alec Jacobson
{"title":"Optimized Dual-Volumes for Tetrahedral Meshes","authors":"Alec Jacobson","doi":"10.1111/cgf.15133","DOIUrl":"10.1111/cgf.15133","url":null,"abstract":"<div>\u0000 \u0000 <p>Constructing well-behaved Laplacian and mass matrices is essential for tetrahedral mesh processing. Unfortunately, the <i>de facto</i> standard linear finite elements exhibit bias on tetrahedralized regular grids, motivating the development of finite-volume methods. In this paper, we place existing methods into a common construction, showing how their differences amount to the choice of simplex centers. These choices lead to satisfaction or breakdown of important properties: continuity with respect to vertex positions, positive semi-definiteness of the implied Dirichlet energy, positivity of the mass matrix, and unbiased-ness on regular grids. Based on this analysis, we propose a new method for constructing dual-volumes which explicitly satisfy all of these properties via convex optimization.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15133","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
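As a toy rendition of "constructing dual-volumes via convex optimization" (a sketch under assumed constraints, not the paper's actual formulation or property set), one can make each vertex's share of every tetrahedron's volume a variable, then require positivity and an exact partition of each tet while staying close to some target split:

```python
# Toy convex program: choose per-(tet, vertex) dual volumes that stay
# positive and exactly partition each tet, as close as possible to a
# target split (e.g. equal quarters). Illustrative only.
import cvxpy as cp
import numpy as np

def optimized_dual_volumes(tet_volumes, tets, n_verts, target):
    """tet_volumes: (T,) tet volumes; tets: (T,4) vertex indices;
    target: (T,4) desired per-vertex shares summing to tet_volumes."""
    w = cp.Variable(target.shape)
    constraints = [
        w >= 1e-12,                          # positive mass matrix entries
        cp.sum(w, axis=1) == tet_volumes,    # exact partition of each tet
    ]
    cp.Problem(cp.Minimize(cp.sum_squares(w - target)), constraints).solve()
    mass_diag = np.zeros(n_verts)            # lumped (diagonal) mass matrix
    np.add.at(mass_diag, tets, w.value)
    return mass_diag

# Usage with one unit-volume tet split into equal quarters:
# tets = np.array([[0, 1, 2, 3]]); vol = np.array([1.0])
# print(optimized_dual_volumes(vol, tets, 4, np.full((1, 4), 0.25)))
```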
Coverage Axis++: Efficient Inner Point Selection for 3D Shape Skeletonization
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-31 DOI: 10.1111/cgf.15143
Zimeng Wang, Zhiyang Dou, Rui Xu, Cheng Lin, Yuan Liu, Xiaoxiao Long, Shiqing Xin, Taku Komura, Xiaoming Yuan, Wenping Wang
{"title":"Coverage Axis++: Efficient Inner Point Selection for 3D Shape Skeletonization","authors":"Zimeng Wang,&nbsp;Zhiyang Dou,&nbsp;Rui Xu,&nbsp;Cheng Lin,&nbsp;Yuan Liu,&nbsp;Xiaoxiao Long,&nbsp;Shiqing Xin,&nbsp;Taku Komura,&nbsp;Xiaoming Yuan,&nbsp;Wenping Wang","doi":"10.1111/cgf.15143","DOIUrl":"10.1111/cgf.15143","url":null,"abstract":"<div>\u0000 \u0000 <p>We introduce Coverage Axis++, a novel and efficient approach to 3D shape skeletonization. The current state-of-the-art approaches for this task often rely on the watertightness of the input [LWS*15; PWG*19; PWG*19] or suffer from substantial computational costs [DLX*22; CD23], thereby limiting their practicality. To address this challenge, Coverage Axis++ proposes a heuristic algorithm to select skeletal points, offering a high-accuracy approximation of the Medial Axis Transform (MAT) while significantly mitigating computational intensity for various shape representations. We introduce a simple yet effective strategy that considers shape coverage, uniformity, and centrality to derive skeletal points. The selection procedure enforces consistency with the shape structure while favoring the dominant medial balls, which thus introduces a compact underlying shape representation in terms of MAT. As a result, Coverage Axis++ allows for skeletonization for various shape representations (e.g., water-tight meshes, triangle soups, point clouds), specification of the number of skeletal points, few hyperparameters, and highly efficient computation with improved reconstruction accuracy. Extensive experiments across a wide range of 3D shapes validate the efficiency and effectiveness of Coverage Axis++. Our codes are available at https://github.com/Frank-ZY-Dou/Coverage_Axis.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
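A compact greedy selection in the spirit of the abstract's three criteria, scoring each candidate inner point by newly covered surface samples (coverage), its medial-ball radius (centrality), and its separation from points already chosen (uniformity). The weights and the 1.05 coverage tolerance are invented for illustration; this is not the paper's exact heuristic.

```python
# Greedy inner-point selection scored by coverage + centrality + uniformity.
import numpy as np

def select_skeletal_points(cands, radii, surf, k, alpha=1.0, beta=1.0):
    """cands: (N,3) candidate inner points; radii: (N,) distances to the
    surface (medial-ball radii); surf: (S,3) surface samples; k: budget."""
    d = np.linalg.norm(surf[None, :, :] - cands[:, None, :], axis=2)  # (N,S)
    covered = np.zeros(len(surf), dtype=bool)
    chosen = []
    for _ in range(k):
        new_cov = ((d <= 1.05 * radii[:, None]) & ~covered[None, :]).sum(axis=1)
        if chosen:
            sep = np.linalg.norm(
                cands[:, None, :] - cands[chosen][None, :, :], axis=2).min(axis=1)
        else:
            sep = np.full(len(cands), radii.max())
        score = new_cov + alpha * radii + beta * sep
        score[chosen] = -np.inf          # never pick a point twice
        i = int(score.argmax())
        chosen.append(i)
        covered |= d[i] <= 1.05 * radii[i]
    return cands[chosen]
```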
1-Lipschitz Neural Distance Fields
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-31 DOI: 10.1111/cgf.15128
Guillaume Coiffier, Louis Béthune
{"title":"1-Lipschitz Neural Distance Fields","authors":"Guillaume Coiffier,&nbsp;Louis Béthune","doi":"10.1111/cgf.15128","DOIUrl":"10.1111/cgf.15128","url":null,"abstract":"<p>Neural implicit surfaces are a promising tool for geometry processing that represent a solid object as the zero level set of a neural network. Usually trained to approximate a signed distance function of the considered object, these methods exhibit great visual fidelity and quality near the surface, yet their properties tend to degrade with distance, making geometrical queries hard to perform without the help of complex range analysis techniques. Based on recent advancements in Lipschitz neural networks, we introduce a new method for approximating the signed distance function of a given object. As our neural function is made 1-Lipschitz by construction, it cannot overestimate the distance, which guarantees robustness even far from the surface. Moreover, the 1-Lipschitz constraint allows us to use a different loss function, called the <i>hinge-Kantorovitch-Rubinstein</i> loss, which pushes the gradient as close to unit-norm as possible, thus reducing computation costs in iterative queries. As this loss function only needs a rough estimate of occupancy to be optimized, this means that the true distance function need not to be known. We are therefore able to compute neural implicit representations of even bad quality geometry such as noisy point clouds or triangle soups. We demonstrate that our methods is able to approximate the distance function of any closed or open surfaces or curves in the plane or in space, while still allowing sphere tracing or closest point projections to be performed robustly.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 5","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
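A minimal PyTorch sketch of the two ingredients the abstract names: a network that is 1-Lipschitz by construction (here via spectral-normalized linear layers with 1-Lipschitz activations; the paper's layers may differ) and a hinge-Kantorovitch-Rubinstein-style loss that needs only rough inside/outside labels, under the sign convention that the field is positive outside. The margin and balance weight are illustrative values.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

def lipschitz_mlp(dims):
    """Each linear layer has spectral norm <= 1 and ReLU is 1-Lipschitz,
    so the composition is 1-Lipschitz by construction."""
    layers = []
    for a, b in zip(dims[:-1], dims[1:]):
        layers += [spectral_norm(nn.Linear(a, b)), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no activation on the output

def hkr_loss(f_outside, f_inside, margin=0.02, lam=100.0):
    """hinge-Kantorovitch-Rubinstein-style loss on rough occupancy labels:
    the KR term spreads the field apart; the hinge enforces a margin."""
    kr = f_inside.mean() - f_outside.mean()
    hinge = (torch.relu(margin - f_outside).mean()
             + torch.relu(margin + f_inside).mean())
    return kr + lam * hinge

# Usage sketch: f = lipschitz_mlp([3, 256, 256, 1]);
# loss = hkr_loss(f(pts_out), f(pts_in)) with pts_out / pts_in sampled
# outside / inside a rough occupancy estimate of the shape.
```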
Evaluation in Neural Style Transfer: A Review
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-30 DOI: 10.1111/cgf.15165
Eleftherios Ioannou, Steve Maddock
{"title":"Evaluation in Neural Style Transfer: A Review","authors":"Eleftherios Ioannou,&nbsp;Steve Maddock","doi":"10.1111/cgf.15165","DOIUrl":"10.1111/cgf.15165","url":null,"abstract":"<p>The field of neural style transfer (NST) has witnessed remarkable progress in the past few years, with approaches being able to synthesize artistic and photorealistic images and videos of exceptional quality. To evaluate such results, a diverse landscape of evaluation methods and metrics is used, including authors' opinions based on side-by-side comparisons, human evaluation studies that quantify the subjective judgements of participants, and a multitude of quantitative computational metrics which objectively assess the different aspects of an algorithm's performance. However, there is no consensus regarding the most suitable and effective evaluation procedure that can guarantee the reliability of the results. In this review, we provide an in-depth analysis of existing evaluation techniques, identify the inconsistencies and limitations of current evaluation methods, and give recommendations for standardized evaluation practices. We believe that the development of a robust evaluation framework will not only enable more meaningful and fairer comparisons among NST methods but will also enhance the comprehension and interpretation of research findings in the field.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 6","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15165","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141871392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Front Matter
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-25 DOI: 10.1111/cgf.15161
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.15161","DOIUrl":"10.1111/cgf.15161","url":null,"abstract":"<p>Imperial College London, South Kensington, London, UK</p><p><b>Program Co-Chairs</b></p><p>Elena Garces, Universidad Rey Juan Carlos, Spain / Adobe, France</p><p>Eric Haines, NVIDIA, US</p><p><b>Conference Chairs</b></p><p>Abhijeet Ghosh, Imperial College London, UK</p><p>Tobias Ritschel, University College London, UK</p><p>Laurent Belcour, Intel</p><p>Pierre Bénard, Bordeaux University, Inria Bordeaux-Sud-Ouest</p><p>Jiří Bittner, Czech Technical University in Prague</p><p>Tamy Boubekeur, Adobe Research</p><p>Per Christensen, Pixar</p><p>Petrik Clarberg, NVIDIA</p><p>Eugene d'Eon, NVIDIA</p><p>Daljit Singh Dhillon, Clemson University</p><p>George Drettakis, INRIA</p><p>Marc Droske, Wētā FX</p><p>Jonathan Dupuy, Intel</p><p>Farshad Einabadi, University of Surrey</p><p>Alban Fichet, Intel</p><p>Iliyan Georgiev, Adobe Research</p><p>Yotam Gingold, George Mason University</p><p>Pascal Grittman, Saarland University</p><p>Thorsten Grosch, TU Clausthal</p><p>Adrien Gruson, École de Technologie Supérieure</p><p>Tobias Günther, FAU Erlangen-Nuremberg</p><p>Milos Hasan, Adobe Research</p><p>Julian Iseringhausen, Google Research</p><p>Adrián Jarabo, Meta</p><p>Markus Kettunen, NVIDIA</p><p>Georgios Kopanas, Inria &amp; Université Côte d'Azur</p><p>Rafael Kuffner dos Anjos, University of Leeds</p><p>Manuel Lagunas, Amazon</p><p>Thomas Leimkühler, MPI Informatik</p><p>Hendrik Lensch, University of Tübingen</p><p>Gabor Liktor, Intel</p><p>Jorge Lopez-Moreno, Universidad Rey Juan Carlos</p><p>Daniel Meister, Advanced Micro Devices, Inc.</p><p>Xiaoxu Meng, Tencent</p><p>Quirin Meyer, Coburg University</p><p>Zahra Montazeri, University of Manchester</p><p>Bochang Moon, Gwangju Institute of Science and Technology</p><p>Krishna Mullia, Adobe Research</p><p>Jacob Munkberg, NVIDIA</p><p>Thu Nguyen-Phuoc, Meta</p><p>Merlin Nimier-David, NVIDIA</p><p>Christoph Peters, Intel</p><p>Matt Pharr, NVIDIA</p><p>Julien Philip, Adobe Research</p><p>Alexander Reshetov, NVIDIA</p><p>Tobias Rittig, Additive Appearance, Charles University</p><p>Fabrice Rousselle, NVIDIA</p><p>Marco Salvi, NVIDIA</p><p>Nicolas Savva, Autodesk, Inc.</p><p>Johannes Schudeiske (Hanika), KIT</p><p>Kai Selgrad, OTH Regensburg</p><p>Ari Silvennoinen, Activision</p><p>Gurprit Singh, MPI Informatik</p><p>Erik Sintorn, Chalmers University of Technology</p><p>Peter-Pike Sloan, Activision</p><p>Cara Tursun, Rijksuniversiteit Groningen</p><p>Karthik Vaidyanathan, NVIDIA</p><p>Konstantinos Vardis, Huawei Technologies</p><p>Delio Vicini, Google</p><p>Jiří Vorba, Weta Digital</p><p>Bruce Walter, Cornell University</p><p>Li-Yi Wei, Adobe Research</p><p>Hongzhi Wu, Zhejiang University</p><p>Zexiang Xu, Adobe Research</p><p>Kai Yan, University of California Irvine</p><p>Tizian Zeltner, NVIDIA</p><p>Shuang Zhao, University of California, Irvine</p><p>Artur Grigorev, ETH Zurich</p><p>\u0000 </p><p>\u0000 </p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":"i-x"},"PeriodicalIF":2.7,"publicationDate":"2024-07-25","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15161","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Neural Appearance Model for Cloth Rendering
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-24 DOI: 10.1111/cgf.15156
G. Y. Soh, Z. Montazeri
{"title":"Neural Appearance Model for Cloth Rendering","authors":"G. Y. Soh,&nbsp;Z. Montazeri","doi":"10.1111/cgf.15156","DOIUrl":"10.1111/cgf.15156","url":null,"abstract":"<div>\u0000 <p>The realistic rendering of woven and knitted fabrics has posed significant challenges throughout many years. Previously, fiber-based micro-appearance models have achieved considerable success in attaining high levels of realism. However, rendering such models remains complex due to the intricate internal scatterings of hundreds of fibers within a yarn, requiring vast amounts of memory and time to render. In this paper, we introduce a new framework to capture aggregated appearance by tracing many light paths through the underlying fiber geometry. We then employ lightweight neural networks to accurately model the aggregated BSDF, which allows for the precise modeling of a diverse array of materials while offering substantial improvements in speed and reductions in memory. Furthermore, we introduce a novel importance sampling scheme to further speed up the rate of convergence. We validate the efficacy and versatility of our framework through comparisons with preceding fiber-based shading models as well as the most recent yarn-based model.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15156","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
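What a "lightweight neural network for the aggregated BSDF" might look like, as a hypothetical sketch (the architecture, input encoding, and yarn feature vector are assumptions, not the authors' design): an MLP maps incoming/outgoing directions expressed in the yarn's local frame, plus a per-yarn feature, to a non-negative RGB response. In the full method an importance-sampling scheme accompanies evaluation; here the network only replaces the costly multi-fiber scattering evaluation.

```python
import torch
import torch.nn as nn

class AggregatedYarnBSDF(nn.Module):
    """Stand-in for a fiber-level model: trained on light paths traced
    through the fiber geometry, then queried at render time instead."""
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # non-negative RGB response
        )

    def forward(self, wi, wo, yarn_feat):
        # wi, wo: (B,3) unit directions in the yarn frame;
        # yarn_feat: (B,feat_dim) per-yarn appearance features.
        return self.net(torch.cat([wi, wo, yarn_feat], dim=-1))
```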
Learning to Rasterize Differentiably
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-24 DOI: 10.1111/cgf.15145
C. Wu, H. Mailee, Z. Montazeri, T. Ritschel
{"title":"Learning to Rasterize Differentiably","authors":"C. Wu,&nbsp;H. Mailee,&nbsp;Z. Montazeri,&nbsp;T. Ritschel","doi":"10.1111/cgf.15145","DOIUrl":"10.1111/cgf.15145","url":null,"abstract":"<p>Differentiable rasterization changes the standard formulation of primitive rasterization — by enabling gradient flow from a pixel to its underlying triangles — using distribution functions in different stages of rendering, creating a “soft” version of the original rasterizer. However, choosing the optimal softening function that ensures the best performance and convergence to a desired goal requires trial and error. Previous work has analyzed and compared several combinations of softening. In this work, we take it a step further and, instead of making a combinatorial choice of softening operations, parameterize the continuous space of common softening operations. We study meta-learning tunable softness functions over a set of inverse rendering tasks (2D and 3D shape, pose and occlusion) so it generalizes to new and unseen differentiable rendering tasks with optimal softness.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
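The mechanism, reduced to a toy: replace the hard inside/outside coverage test with a smooth, parameterized falloff of the pixel's signed distance to the triangle, so gradients flow both to the geometry and to the softness parameters themselves, which can then be meta-learned across tasks. The sigmoid-with-exponent family below is one made-up point in such a space of softening operations, not the paper's learned parameterization.

```python
import torch

def soft_coverage(signed_dist, temperature, exponent):
    """signed_dist: pixel-to-edge distance, positive inside the triangle.
    temperature and exponent are differentiable softness parameters that
    a meta-learner could tune per task."""
    d = torch.sign(signed_dist) * signed_dist.abs().pow(exponent)
    return torch.sigmoid(d / temperature)  # hard step as temperature -> 0
```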
MatUp: Repurposing Image Upsamplers for SVBRDFs
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-24 DOI: 10.1111/cgf.15151
A. Gauthier, B. Kerbl, J. Levallois, R. Faury, J. M. Thiery, T. Boubekeur
{"title":"MatUp: Repurposing Image Upsamplers for SVBRDFs","authors":"A. Gauthier,&nbsp;B. Kerbl,&nbsp;J. Levallois,&nbsp;R. Faury,&nbsp;J. M. Thiery,&nbsp;T. Boubekeur","doi":"10.1111/cgf.15151","DOIUrl":"10.1111/cgf.15151","url":null,"abstract":"<p>We propose M<span>at</span>U<span>p</span>, an upsampling filter for material super-resolution. Our method takes as input a low-resolution SVBRDF and upscales its maps so that their rendering under various lighting conditions fits upsampled renderings inferred in the radiance domain with pre-trained RGB upsamplers. We formulate our local filter as a compact Multilayer Perceptron (MLP), which acts on a small window of the input SVBRDF and is optimized using a data-fitting loss defined over upsampled radiance at various locations. This optimization is entirely performed at the scale of a single, independent material. Doing so, M<span>at</span>U<span>p</span> leverages the reconstruction capabilities acquired over large collections of natural images by pre-trained RGB models and provides regularization over self-similar structures. In particular, our light-weight neural filter avoids retraining complex architectures from scratch or accessing any large collection of low/high resolution material pairs – which do not actually exist at the scale RGB upsamplers are trained with. As a result, M<span>at</span>U<span>p</span> provides fine and coherent details in the upscaled material maps, as shown in the extensive evaluation we provide.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
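A sketch of the optimization loop the abstract outlines, with two assumed callables standing in for components the method relies on: render(svbrdf, light), a differentiable renderer, and rgb_upsample(img), a frozen pre-trained RGB super-resolution model. The window size, hidden width, and 2x factor are illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatUpFilter(nn.Module):
    """Compact MLP over a small window of the low-res SVBRDF maps,
    predicting a 2x2 block of high-res texels per input texel."""
    def __init__(self, channels, window=5, hidden=64):
        super().__init__()
        self.c, self.w = channels, window
        self.mlp = nn.Sequential(
            nn.Linear(channels * window * window, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, channels * 4),
        )

    def forward(self, maps):                                   # (1, C, H, W)
        _, _, H, W = maps.shape
        patches = F.unfold(maps, self.w, padding=self.w // 2)  # (1, C*w*w, H*W)
        out = self.mlp(patches.transpose(1, 2))                # (1, H*W, C*4)
        out = out.transpose(1, 2).reshape(1, self.c, 2, 2, H, W)
        return out.permute(0, 1, 4, 2, 5, 3).reshape(1, self.c, 2 * H, 2 * W)

def fit(filter_net, low, lights, render, rgb_upsample, steps=2000):
    """Fit the filter so renders of the upscaled maps match RGB-upsampled
    renders of the low-res input under several lighting conditions."""
    opt = torch.optim.Adam(filter_net.parameters(), lr=1e-3)
    targets = [rgb_upsample(render(low, L)).detach() for L in lights]
    for _ in range(steps):
        hi = filter_net(low)
        loss = sum(F.mse_loss(render(hi, L), t) for L, t in zip(lights, targets))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return filter_net
```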
Lossless Basis Expansion for Gradient-Domain Rendering
IF 2.7 | CAS Q4, Computer Science
Computer Graphics Forum Pub Date : 2024-07-24 DOI: 10.1111/cgf.15153
Q. Fang, T. Hachisuka
{"title":"Lossless Basis Expansion for Gradient-Domain Rendering","authors":"Q. Fang,&nbsp;T. Hachisuka","doi":"10.1111/cgf.15153","DOIUrl":"10.1111/cgf.15153","url":null,"abstract":"<p>Gradient-domain rendering utilizes difference estimates with shift mapping to reduce variance in Monte Carlo rendering. Such difference estimates are effective under the assumption that pixels for difference estimates have similar integrands. This assumption is often violated because it is common to have spatially varying BSDFs with material maps, which potentially result in a very different integrand per pixel. We introduce an extension of gradient-domain rendering that effectively supports such per-pixel variation in BSDFs based on basis expansion. Basis expansion for BSDFs has been used extensively in other problems in rendering, where the goal is to approximate a given BSDF by a weighted sum of predefined basis functions. We instead utilize lossless basis expansion, representing a BSDF without any approximation by adding the remaining difference in the original basis expansion. This lossless basis expansion allows us to cancel more terms via shift mapping, resulting in low variance difference estimates even with per-pixel BSDF variation. We also extend the Poisson reconstruction process to support this basis expansion. Regular gradient-domain rendering can be expressed as a special case of our extension, where the basis is simply the BSDF per pixel (i.e., no basis expansion). We provide proof-of-concept experiments and showcase the effectiveness of our method for scenes with highly varying material maps. Our results show noticeable improvement over regular gradient-domain rendering under both L<sup>1</sup> and L<sup>2</sup> reconstructions. The resulting formulation via basis expansion essentially serves as a new way of path reuse among pixels in the presence of per-pixel variation.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 4","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
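Schematically, in our own notation (not taken from the paper), the lossless expansion keeps the residual of the fit explicitly, so the shift-mapped difference between two pixels splits into strongly cancelling basis terms plus a small residual difference:

```latex
% Per-pixel BSDF f_i written against shared basis functions b_k; the
% residual r_i is kept exactly, so the representation is lossless:
\[
  f_i = \sum_k w_{i,k}\, b_k + r_i,
  \qquad
  r_i := f_i - \sum_k w_{i,k}\, b_k .
\]
% A shift-mapped difference estimate between pixels i and j then reads
\[
  \Delta_{ij}
  = \sum_k \bigl(w_{i,k} - w_{j,k}\bigr)\, I[b_k]
  + \bigl(I[r_i] - I[r_j]\bigr),
\]
% with I[.] the path contribution using the bracketed factor in place of
% the BSDF; similar weights across pixels make the basis terms cancel.
```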