MatSwap: Light-aware material transfers in images
I. Lopes, V. Deschaintre, Y. Hold-Geoffroy, R. de Charette
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70168

Abstract: We present MatSwap, a method to realistically transfer materials to designated surfaces in an image. This task is non-trivial because material appearance, geometry, and lighting are heavily entangled in a photograph. Material editing methods in the literature typically rely on either cumbersome text engineering or extensive manual annotations that require artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material, as observed on a flat surface, and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model. We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. Evaluations on synthetic and real images show that MatSwap compares favorably to recent work.
Our code and data are publicly available at https://github.com/astra-vision/MatSwap
VideoMat: Extracting PBR Materials from Video Diffusion Models
J. Munkberg, Z. Wang, R. Liang, T. Shen, J. Hasselgren
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70180

Abstract: We leverage fine-tuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting, producing multiple views of a given 3D model with coherent material properties. Second, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we feed the intrinsics alongside the generated video into a differentiable path tracer to robustly extract PBR materials directly compatible with common content-creation tools.
Detail-Preserving Real-Time Hair Strand Linking and Filtering
T. Huang, J. Yuan, R. Hu, L. Wang, Y. Guo, B. Chen, J. Guo, J. Zhu
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70176

Abstract: Realistic hair rendering remains a significant challenge in computer graphics due to the intricate microstructure of hair fibers and their anisotropic scattering properties, which make them highly sensitive to noise. Although recent advancements in image-space and 3D-space denoising and antialiasing techniques have facilitated real-time rendering in simple scenes, existing methods still struggle with excessive blurring and artifacts, particularly in fine hair details such as flyaway strands. These issues arise because current techniques often fail to preserve sub-pixel continuity and lack directional sensitivity in the filtering process. To address these limitations, we introduce a novel real-time hair filtering technique that effectively reconstructs fine fiber details while suppressing noise. Our method improves visual quality by maintaining strand-level details and ensuring computational efficiency, making it well-suited for real-time applications in video games and virtual reality (VR) and augmented reality (AR) environments.
SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas
J. Li, F. Hahlbohm, T. Scholz, M. Eisemann, J.P. Tauscher, M. Magnor
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70171
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70171

Abstract: In this paper we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach for 360-degree panoramic images. While existing methods built on Neural Radiance Fields or 3D Gaussian Splatting struggle to achieve real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian-based scene representation and ray-casting-based rendering to attain fast and accurate results. Central to our approach is the exact calculation of axis-aligned bounding boxes for spherical images, which significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset of ten real-world scenes recorded with a drone, containing both calibrated 360-degree panoramic images and perspective images captured simultaneously, i.e., along the same flight trajectory. Our evaluation on this new dataset as well as on established benchmarks demonstrates that SPaGS outperforms state-of-the-art methods in terms of both rendering quality and speed.
DiffNEG: A Differentiable Rasterization Framework for Online Aiming Optimization in Solar Power Tower Systems
Cangping Zheng, Xiaoxia Lin, Dongshuai Li, Yuhong Zhao, Jieqing Feng
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70166

Abstract: Inverse rendering aims to infer scene parameters from observed images. In Solar Power Tower (SPT) systems, this corresponds to an aiming optimization problem: adjusting heliostats' orientations to shape the radiative flux density distribution (RFDD) on the receiver to conform to a desired distribution. The SPT system is widely favored in the field of renewable energy, where aiming optimization is crucial for ensuring its thermal efficiency and safety. However, traditional aiming optimization methods are inefficient and fail to meet online demands. In this paper, a novel optimization approach, DiffNEG, is proposed. DiffNEG introduces a differentiable rasterization method to model the reflected radiative flux of each heliostat as an elliptical Gaussian distribution. It leverages data-driven techniques to enhance simulation accuracy and employs automatic differentiation combined with gradient descent to achieve online, gradient-guided optimization in a continuous solution space. Experiments on a real large-scale heliostat field with nearly 30,000 heliostats demonstrate that DiffNEG can optimize within 10 seconds, improving efficiency by one order of magnitude over the recent DiffMCRT method and by three orders of magnitude over traditional heuristic methods, while also exhibiting superior robustness under both steady and transient states.
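The core idea of the abstract above — model each heliostat's reflected flux as a Gaussian spot on the receiver and run gradient descent on the aim points — can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: it uses isotropic Gaussians where DiffNEG fits elliptical ones, hand-derived gradients in place of automatic differentiation, and toy names (`flux_map`, `descend`) of my own.

```python
import numpy as np

def flux_map(aims, xs, ys, amp=1.0, sigma=0.5):
    """Total receiver flux as a sum of Gaussian spots, one per aim point."""
    dx = xs[None, :, :] - aims[:, 0, None, None]   # (N, H, W)
    dy = ys[None, :, :] - aims[:, 1, None, None]
    spots = amp * np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
    return spots.sum(axis=0), spots, dx, dy

def descend(aims, target, xs, ys, lr=0.5, sigma=0.5, iters=200):
    """Gradient descent on aim points to match a target flux distribution."""
    for _ in range(iters):
        total, spots, dx, dy = flux_map(aims, xs, ys, sigma=sigma)
        resid = total - target                      # (H, W)
        n = resid.size
        # Analytic gradient of the mean-squared-error loss w.r.t. aim point i:
        # d(spot_i)/d(aim_x_i) = spot_i * dx_i / sigma^2
        gx = (2.0 * resid[None] * spots * dx).sum(axis=(1, 2)) / (sigma**2 * n)
        gy = (2.0 * resid[None] * spots * dy).sum(axis=(1, 2)) / (sigma**2 * n)
        aims = aims - lr * np.stack([gx, gy], axis=1)
    total, _, _, _ = flux_map(aims, xs, ys, sigma=sigma)
    return aims, float(((total - target) ** 2).mean())
```

Because the loss is differentiable in the aim points, each optimization step is a cheap vectorized pass over the receiver grid, which is what makes online (seconds-scale) optimization plausible even for large fields.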
Perceived quality of BRDF models
Behnaz Kavoosighafi, Rafał K. Mantiuk, Saghi Hajisharif, Ehsan Miandji, Jonas Unger
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70162
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70162

Abstract: Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which must trade accuracy against complexity and storage cost. To investigate the current practices of BRDF modeling, we collect the first high dynamic range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the loss functions currently used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and a simple Euclidean distance in the ITP color space (ΔE_ITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.
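The ΔE_ITP metric highlighted in the abstract above is defined in Rec. ITU-R BT.2124 and is simple to compute once colors are in ICtCp: the chroma axis Ct is halved to give the T coordinate, and the Euclidean distance is scaled by 720 so that a value of 1 approximates a just-noticeable difference. A minimal sketch (the ICtCp conversion from RGB is assumed done elsewhere):

```python
def delta_e_itp(ictcp_a, ictcp_b):
    """Rec. ITU-R BT.2124 color difference between two ICtCp triplets.

    ITP coordinates are (I, T, P) with T = 0.5 * Ct and P = Cp;
    dE_ITP = 720 * sqrt(dI^2 + dT^2 + dP^2).
    """
    d_i = ictcp_a[0] - ictcp_b[0]
    d_t = 0.5 * (ictcp_a[1] - ictcp_b[1])   # Ct is halved in ITP space
    d_p = ictcp_a[2] - ictcp_b[2]
    return 720.0 * (d_i * d_i + d_t * d_t + d_p * d_p) ** 0.5
```

For example, two colors differing only by 0.1 in the intensity channel I give ΔE_ITP = 72.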
Real-time Level-of-detail Strand-based Rendering
T. Huang, Y. Zhou, D. Lin, J. Zhu, L. Yan, K. Wu
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70181

Abstract: We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model to accurately capture both single and multiple scattering within a cluster of hairs or fibers. Building upon this, we further introduce a LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animations, as well as knit patches, our framework closely replicates the appearance of multiple-scattered full geometries at various viewing distances, achieving up to a 13× speedup.
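The projected-screen-width criterion described in the abstract above reduces to a pinhole-projection estimate per cluster. The sketch below is a hypothetical illustration of that selection logic only (the threshold, function names, and the two-level "strands"/"cluster" choice are my own simplification, not the paper's framework):

```python
def projected_width_px(world_radius, distance, focal_px):
    """Approximate on-screen width, in pixels, of a tube of the given
    cross-section radius at the given camera distance (pinhole model)."""
    return 2.0 * world_radius * focal_px / distance

def select_lod(strand_radius, distance, focal_px, threshold_px=1.0):
    """Render individual strands while they still cover ~a pixel on screen;
    otherwise switch to the aggregated thick-strand representation."""
    if projected_width_px(strand_radius, distance, focal_px) >= threshold_px:
        return "strands"
    return "cluster"
```

Because the decision is made per cluster from its own projected width, nearby clusters can stay at full strand detail while distant ones collapse, which is what allows the transition to be independent and adaptive.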
Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
J. Daniel Subias, Saul Daniel-Soriano, Diego Gutierrez, Ana Serrano
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70182
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70182

Abstract: Large diffusion models have made a remarkable leap in synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control to guide key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset with 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., "A glossy bunny hand painted with an orange soft crayon"); and (3) we train ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects from simple inputs such as edge maps, hand-drawn sketches, or clip arts. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
StructuReiser: A Structure-preserving Video Stylization Method
R. Spetlik, D. Futschik, D. Sýkora
Computer Graphics Forum 44(4), 2025. DOI: 10.1111/cgf.70161
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70161

Abstract: We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike most existing methods, StructuReiser strictly adheres to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This provides a level of control and consistency that is challenging to achieve with text-driven or keyframe-based approaches, including large video models. Furthermore, StructuReiser supports real-time inference on standard graphics hardware as well as custom keyframe editing, enabling interactive applications and expanding possibilities for creative expression and video manipulation.