Computer Graphics Forum: Latest Publications

Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70182
Authors: J. Daniel Subias, Saul Daniel-Soriano, Diego Gutierrez, Ana Serrano
Abstract: Large diffusion models have made a remarkable leap synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control to guide key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset with 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., "A glossy bunny hand painted with an orange soft crayon"); and (3) we train ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects, using simple inputs such as edge maps, hand-drawn sketches, or clip art. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70182
Citations: 0
StructuReiser: A Structure-preserving Video Stylization Method
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70161
Authors: R. Spetlik, D. Futschik, D. Sýkora
Abstract: We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike most existing methods, StructuReiser strictly adheres to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This provides a level of control and consistency that is challenging to achieve with text-driven or keyframe-based approaches, including large video models. Furthermore, StructuReiser supports real-time inference on standard graphics hardware as well as custom keyframe editing, enabling interactive applications and expanding possibilities for creative expression and video manipulation.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70161
Citations: 0
Importance Sampling of the Micrograin Visible NDF
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70174
Authors: S. Lucas, R. Pacanowski, P. Barla
Abstract: Importance sampling of visible normal distribution functions (vNDF) is a required ingredient for the efficient rendering of microfacet-based materials. In this paper, we explain how to sample the vNDF for the micrograin material model [LRPB23], which has recently been improved to handle height-normal correlations through a new Geometric Attenuation Factor (GAF) [LRPB24], leading to a stronger impact on appearance compared to the earlier Smith approximation. To this end, we make two contributions: we derive analytic expressions for the marginal and conditional cumulative distribution functions (CDFs) of the vNDF, and we provide efficient methods for inverting these CDFs based respectively on a 2D lookup table and on the triangle-cut method [Hei20].
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70174
Citations: 0
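The CDF inversion described in this abstract can be illustrated with generic inverse-transform sampling from a tabulated 1-D CDF. This is only a sketch using a hypothetical cosine-lobe density, not the paper's actual 2-D lookup-table or triangle-cut scheme; all names and the example density are assumptions.

```python
import numpy as np

def tabulate_cdf(pdf_values):
    """Build a normalized, discretized CDF from tabulated (unnormalized) PDF samples."""
    cdf = np.cumsum(pdf_values, dtype=np.float64)
    return cdf / cdf[-1]

def sample_inverse_cdf(cdf, u, grid):
    """Invert a tabulated CDF at uniform random values u via binary search."""
    idx = np.searchsorted(cdf, u)           # first index with cdf[idx] >= u
    return grid[np.clip(idx, 0, len(grid) - 1)]

# Example: sample angles from an (unnormalized) cosine-lobe pdf over [0, pi/2]
theta = np.linspace(0.0, np.pi / 2, 1024)
pdf = np.cos(theta)
cdf = tabulate_cdf(pdf)
u = np.random.default_rng(0).random(10000)
samples = sample_inverse_cdf(cdf, u, theta)
```

For this density the exact inverse is arcsin(u), so the tabulated sampler can be checked against the analytic mean π/2 − 1 ≈ 0.571.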
Neural field multi-view shape-from-polarisation
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70177
Authors: R. Wanaset, G. C. Guarnera, W. A. P. Smith
Abstract: We tackle the problem of multi-view shape-from-polarisation using a neural implicit surface representation and volume rendering of a polarised neural radiance field (P-NeRF). The P-NeRF predicts the parameters of a mixed diffuse/specular polarisation model. This directly relates polarisation behaviour to the surface normal without explicitly modelling illumination or BRDF. Via the implicit surface representation, this allows polarisation to directly inform the estimated geometry. This improves shape estimation and also allows separation of diffuse and specular radiance. For polarimetric images from division-of-focal-plane sensors, we fit directly to the raw data without first demosaicing. This avoids fitting to demosaicing artefacts, and we propose losses and saturation masking specifically to handle HDR measurements. Our method achieves state-of-the-art performance on the PANDORA benchmark. We apply our method in a lightstage setting, providing single-shot face capture.
Citations: 0
Controllable Biophysical Human Faces
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70170
Authors: Minghao Liu, Stephane Grabli, Sébastien Speierer, Nikolaos Sarafianos, Lukas Bode, Matt Chiang, Christophe Hery, James Davis, Carlos Aliaga
Abstract: We present a novel generative model that synthesizes photorealistic, biophysically plausible faces by capturing the intricate relationships between facial geometry and biophysical attributes. Our approach models facial appearance in a biophysically grounded manner, allowing for the editing of both high-level attributes such as age and gender and low-level biophysical properties such as melanin level and blood content. This enables continuous modeling of physical skin properties, correlating changes in those properties with shape changes. We showcase the capabilities of our framework beyond its role as a generative model through two practical applications: editing the texture maps of 3D faces that have already been captured, and serving as a strong prior for face reconstruction when combined with differentiable rendering. Our model allows for the creation of physically based, relightable, editable faces with consistent topology and UV layout that can be integrated into traditional computer graphics pipelines.
Citations: 0
Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70179
Authors: Jungeon Kim, Geonsoo Park, Seungyong Lee
Abstract: Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints. In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization to avoid Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.
Citations: 0
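The abstract does not give the exact form of the median depth-based relative depth loss. The following is a speculative minimal sketch of one plausible variant — normalizing each depth map by its median to gain scale invariance and weighting residuals by a per-pixel confidence — purely for illustration; the function name, normalization, and weighting are all assumptions, not the authors' formulation.

```python
import numpy as np

def median_relative_depth_loss(pred, mvs, weight):
    """Scale-invariant depth consistency (hypothetical sketch): normalize each
    depth map by its median, then average confidence-weighted absolute
    differences between the rendered and MVS depths."""
    pred_n = pred / np.median(pred)   # median normalization removes global scale
    mvs_n = mvs / np.median(mvs)
    return float(np.mean(weight * np.abs(pred_n - mvs_n)))
```

By construction the loss is zero when the two depth maps agree up to a global scale factor, which is the property a relative depth loss is meant to provide.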
Differentiable Search Based Halftoning
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70173
Authors: E. Luci, K. T. Wijaya, V. Babaei
Abstract: Halftoning is fundamental to image reproduction on devices with a limited set of output levels, such as printers. Halftoning algorithms reproduce continuous-tone images by distributing dots with a fixed tone but variable size or spacing. Search-based approaches optimize for a dot distribution that minimizes a given visual loss function w.r.t. an input image. This class of methods is not only the most intuitive and versatile but can also yield the highest quality results depending on the merit of the employed loss function. However, their combinatorial nature makes them computationally inefficient. We introduce the first differentiable search-based halftoning algorithm. Our proposed method can be natively used to perform multi-color, multi-level halftoning. Our main insight lies in introducing a relaxation in the discrete choice of dot assignment during the backward pass of the optimization. We achieve this by associating a fictitious distance from the image plane to each dot, embedding the problem in three dimensions. We also introduce a novel loss component that operates in the frequency domain and provides a better visual loss when combined with existing image similarity metrics. We validate our approach by demonstrating that it outperforms stochastic optimization methods in both speed and objective value, while also scaling significantly better to large images. The code is available at https://gitlab.mpi-klsb.mpg.de/aidam-public/differentiable-halftoning
Citations: 0
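The relaxation idea in this abstract — a fictitious distance from the image plane that makes a discrete dot choice differentiable — can be sketched generically: threshold the signed distance in the forward pass, but back a sigmoid of it for gradients. This is only a loose illustration of the general technique; the sharpness parameter and exact form are assumptions, not the paper's method.

```python
import numpy as np

def soft_dot_assignment(z, sharpness=10.0):
    """Relax the binary 'dot on/off' choice. Each dot carries a fictitious
    signed distance z from the image plane: thresholding z > 0 gives the hard
    forward-pass halftone, while a sigmoid of z gives a smooth, differentiable
    surrogate usable in the backward pass."""
    soft = 1.0 / (1.0 + np.exp(-sharpness * z))  # smooth coverage in (0, 1)
    hard = (z > 0).astype(float)                 # discrete forward decision
    return hard, soft
```

As the sharpness grows, the smooth surrogate converges to the hard threshold, so gradients computed through it increasingly reflect the discrete decision.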
Real-Time Image-based Lighting of Glints
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70175
Authors: Tom Kneiphof, Reinhard Klein
Abstract: Image-based lighting is a widely used technique to reproduce shading under real-world lighting conditions, especially in real-time rendering applications. A particularly challenging scenario involves materials exhibiting a sparkling or glittering appearance, caused by discrete microfacets scattered across their surface. In this paper, we propose an efficient approximation for image-based lighting of glints, enabling fully dynamic material properties and environment maps. Our novel approach is grounded in real-time glint rendering under area light illumination and employs standard environment map filtering techniques. Crucially, our environment map filtering process is sufficiently fast to be executed on a per-frame basis. Our method assumes that the environment map is partitioned into few homogeneous regions of constant radiance. By filtering the corresponding indicator functions with the normal distribution function, we obtain the probabilities for individual microfacets to reflect light from each region. During shading, these probabilities are utilized to hierarchically sample a multinomial distribution, facilitated by our novel dual-gated Gaussian approximation of binomial distributions. We validate that our real-time approximation is close to ground-truth renderings for a range of material properties and lighting conditions, and demonstrate robust and stable performance, with little overhead over rendering glints from a single directional light. Compared to rendering smooth materials without glints, our approach requires twice as much memory to store the prefiltered environment map.
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70175
Citations: 0
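The hierarchical multinomial sampling mentioned in this abstract can be sketched by splitting the microfacet count over regions with sequential binomial draws, switching to a Gaussian approximation when the count is large. Note this uses a plain rounded, clamped Gaussian as a stand-in for the paper's dual-gated Gaussian approximation; the threshold and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def approx_binomial(n, p):
    """Sample Binomial(n, p); for large variance, use a rounded, clamped
    Gaussian approximation (a simplification of the paper's dual-gated
    Gaussian) to avoid O(n) exact sampling cost."""
    if n * p * (1 - p) < 30:                     # small regime: sample exactly
        return int(rng.binomial(n, p))
    mean, std = n * p, np.sqrt(n * p * (1 - p))
    return int(np.clip(round(rng.normal(mean, std)), 0, n))

def sample_multinomial(n, probs):
    """Hierarchically split n microfacets over regions via sequential binomials:
    each step draws one region's count conditioned on the remaining total."""
    counts, remaining, rest = [], n, 1.0
    for p in probs[:-1]:
        p_eff = min(max(p / rest, 0.0), 1.0) if rest > 0 else 1.0
        counts.append(approx_binomial(remaining, p_eff))
        remaining -= counts[-1]
        rest -= p
    counts.append(remaining)                     # last region takes the rest
    return counts

counts = sample_multinomial(100000, [0.5, 0.3, 0.2])
```

Sequential binomial splitting is exact for a multinomial; the approximation only enters through the Gaussian stand-in for each binomial draw.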
Continuous-Line Image Stylization Based on Hilbert Curve
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70169
Authors: Zhifang Tong, Bolei Zuov, Xiaoxia Yang, Shengjun Liu, Xinru Liu
Abstract: Horizontal and vertical lines hold significant aesthetic and psychological importance, providing a sense of order, stability, and security. This paper presents an image stylization method that quickly generates non-self-intersecting, regular continuous lines based on the Hilbert curve, a well-known space-filling curve consisting of only horizontal and vertical segments. We first calculate the grayscale threshold based on gray quantization for the original image and recursively subdivide the cells according to the density in each cell. To avoid generating new feature curves due to limited gray quantization, a recursive subdivision with probability is designed to smooth the density. Then, we utilize the construction rule of the Hilbert curve to generate continuous lines connecting all the cells. Between Hilbert curves of different degrees, bridge curves composed of horizontal and vertical lines are constructed, which are also intersection-free, instead of linking them directly with a straight line. Two parameters are provided for flexibly adjusting the visual effect. The image stylization framework generalizes to other space-filling curves, such as the Peano curve. Compared to existing methods, our approach can generate pleasing results quickly and is fully automated. Extensive results show our method is robust and effective.
Citations: 0
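The Hilbert curve ordering this method builds on can be computed with the classic index-to-coordinate (d2xy) algorithm. This is textbook code for the standard curve, not the authors' implementation, shown here to make the cell-ordering idea concrete.

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d along the Hilbert curve of the given order to (x, y)
    on a 2**order x 2**order grid, using the standard bit-twiddling recurrence."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The order-2 curve visits all 16 cells of a 4x4 grid exactly once,
# moving one cell at a time with only horizontal and vertical steps.
points = [hilbert_d2xy(2, d) for d in range(16)]
```

Consecutive indices always map to grid-adjacent cells, which is exactly the property that lets the stylization draw one continuous, non-self-intersecting line through every cell.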
High-Fidelity Texture Transfer Using Multi-Scale Depth-Aware Diffusion
IF 2.9 · CAS Zone 4 · Computer Science
Computer Graphics Forum · Pub Date: 2025-07-24 · DOI: 10.1111/cgf.70172
Authors: Rongzhen Lin, Zichong Chen, Xiaoyong Hao, Yang Zhou, Hui Huang
Abstract: Textures are a key component of 3D assets. Transferring textures from one shape to another, without user interaction or additional semantic guidance, is a classical yet challenging problem. It can enhance the diversity of existing shape collections, augmenting their application scope. This paper proposes an innovative 3D texture transfer framework that leverages the generative power of pre-trained diffusion models. While diffusion models have achieved significant success in 2D image generation, their application to 3D domains faces great challenges in preserving coherence across different viewpoints. Addressing this issue, we designed a multi-scale generation framework to optimize the UV maps coarse-to-fine. To ensure multi-view consistency, we use depth information as geometric guidance; meanwhile, a novel consistency loss is proposed to further constrain color coherence and reduce artifacts. Experimental results demonstrate that our multi-scale framework not only produces high-quality texture transfer results but also excels in handling complex shapes while preserving correct semantic correspondences. Compared to existing techniques, our method achieves improvements in consistency, texture clarity, and time efficiency.
Citations: 0