Computer Graphics Forum: Latest Articles

Front Matter
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-08-04 DOI: 10.1111/cgf.70165
Computer Graphics Forum, volume 44, issue 4, pages i-x. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70165

Copenhagen, Denmark

Beibei Wang - Nanjing University
Alexander Wilkie - Charles University

Conference Chair

Laurent Belcour - Intel Corporation
Jiří Bittner - Czech Technical University in Prague
Tamy Boubekeur - Adobe Research
Matt Jen-Yuan Chiang - Meta Reality Labs Research
Valentin Deschaintre - Adobe Research
Jean-Michel Dischler - ICUBE - Université de Strasbourg
George Drettakis - INRIA, Université Côte d'Azur
Farshad Einabadi - University of Surrey
Arthur Firmino - KeyShot
Elena Garces - Adobe
Iliyan Georgiev - Adobe Research
Abhijeet Ghosh - Imperial College London
Yotam Gingold - George Mason University
Pascal Grittmann - Saarland University
Thorsten Grosch - TU Clausthal
Adrien Gruson - École de Technologie Supérieure
Jie Guo - Nanjing University
Toshiya Hachisuka - University of Waterloo
David Hahn - TU Wien
Johannes Hanika - Karlsruhe Institute of Technology
Milos Hasan - Adobe Research
Sebastian Herholz - Intel Corporation
Nicolas Holzschuch - INRIA
Tomáš Iser - Charles University
Julian Iseringhausen - Google Research
Wojciech Jarosz - Dartmouth College
Alisa Jung - IVD / Karlsruhe Institute of Technology
Markus Kettunen - NVIDIA
Manuel Lagunas - Amazon
Sungkil Lee - Sungkyunkwan University
Tzu-Mao Li - UC San Diego
Daqi Lin - NVIDIA
Jorge Lopez-Moreno - Universidad Rey Juan Carlos
Steve Marschner - Cornell University
Daniel Martin - Universidad de Zaragoza
Bochang Moon - Gwangju Institute of Science and Technology
Krishna Mullia - Adobe Research
Jacob Munkberg - NVIDIA Corporation
Merlin Nimier-David - NVIDIA
Emilie Nogue - Imperial College London
Jan Novak - NVIDIA
Pieter Peers - College of William & Mary
Christoph Peters - TU Delft
Matt Pharr - NVIDIA
Julien Philip - Netflix Eyeline Studios
Alina Pranovich - Technical University of Denmark
Marco Salvi - NVIDIA
Nicolas Savva - Cornell University
Gurprit Singh - Max-Planck Institute for Informatics, Saarbrücken
Shlomi Steinberg - University of California Santa Barbara
Daniel Sýkora - CTU in Prague, FEE
Natalya Tatarchuk - Activision / Microsoft
Konstantinos Vardis - Huawei Technologies
Delio Vicini - Google
Jiří Vorba - Weta Digital
Rui Wang - Zhejiang University
Li-Yi Wei - Adobe Research
Tien-Tsin Wong - Monash University
Hongzhi Wu - Zhejiang University
Kui Wu - LightSpeed Studios
Lifan Wu - NVIDIA
Mengqi Xia - Yale University
Kun Xu - Tsinghua University
Kai Yan - University of California Irvine
Ling-Qi Yan - UC Santa Barbara
Yuchi Huo - Zhejiang University
Cem Yuksel - University of Utah
Tizian Zeltner - NVIDIA
Shuang Zhao - University of …
Citations: 0
MatSwap: Light-aware material transfers in images
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70168
I. Lopes, V. Deschaintre, Y. Hold-Geoffroy, R. de Charette
Abstract: We present MatSwap, a method to realistically transfer materials to designated surfaces in an image. This task is non-trivial because material appearance, geometry, and lighting are tightly entangled in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations that require artist knowledge and 3D scene properties which are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material, as observed on a flat surface, and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model. We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. MatSwap is evaluated on synthetic and real images and compares favorably to recent works. Our code and data are publicly available at https://github.com/astra-vision/MatSwap
Computer Graphics Forum, volume 44, issue 4.
Citations: 0
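To make the conditioning idea concrete, below is a minimal sketch of channel-concatenation conditioning for a denoising network, in the spirit of a light- and geometry-aware diffusion model. The channel layout (flat material exemplar, screen-space normals, scalar irradiance), the tiny network, and all names are illustrative assumptions, not MatSwap's actual architecture.

```python
# Minimal sketch: a denoiser that sees lighting/geometry cues as extra input
# channels. Channel layout and network are assumptions, not the paper's model.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, latent_ch=4, cond_ch=7, hidden=64):
        super().__init__()
        # cond_ch = 3 (material exemplar) + 3 (normals) + 1 (irradiance): assumed.
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),  # predicts the noise residual
        )

    def forward(self, noisy_latent, material, normals, irradiance):
        cond = torch.cat([material, normals, irradiance], dim=1)
        return self.net(torch.cat([noisy_latent, cond], dim=1))

x = torch.randn(1, 4, 64, 64)    # noisy latent
mat = torch.rand(1, 3, 64, 64)   # flat-surface material exemplar
nrm = torch.rand(1, 3, 64, 64)   # screen-space normals
irr = torch.rand(1, 1, 64, 64)   # scalar irradiance estimate
eps = ConditionedDenoiser()(x, mat, nrm, irr)
print(eps.shape)  # torch.Size([1, 4, 64, 64])
```

In a full system, such a conditioned denoiser would be a fine-tuned pre-trained UNet rather than a fresh network, so the text-to-image priors are preserved as the abstract describes.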
VideoMat: Extracting PBR Materials from Video Diffusion Models
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70180
J. Munkberg, Z. Wang, R. Liang, T. Shen, J. Hasselgren
Abstract: We leverage fine-tuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting condition; this model produces multiple views of a given 3D model with coherent material properties. Second, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we use the intrinsics alongside the generated video in a differentiable path tracer to robustly extract PBR materials that are directly compatible with common content creation tools.
Computer Graphics Forum, volume 44, issue 4.
Citations: 0
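As an illustration of the final stage, here is a minimal sketch of recovering material parameters by gradient descent through a differentiable renderer. A diffuse-only shading model stands in for the differentiable path tracer used in the paper, and target_frames, light_dirs, and normals are assumed inputs; a full implementation would also optimize roughness and metallic through a microfacet BRDF.

```python
# Minimal sketch: fit per-pixel base color so a differentiable (here: trivially
# diffuse) renderer reproduces the generated video frames.
import torch

H, W, n_frames = 64, 64, 8
target_frames = torch.rand(n_frames, 3, H, W)  # frames from the video model (assumed)
light_dirs = torch.nn.functional.normalize(torch.randn(n_frames, 3), dim=-1)
normals = torch.nn.functional.normalize(torch.randn(3, H, W), dim=0)

base_color = torch.full((3, H, W), 0.5, requires_grad=True)
opt = torch.optim.Adam([base_color], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = 0.0
    for f in range(n_frames):
        # Lambertian stand-in shading: albedo * max(n . l, 0)
        ndotl = (normals * light_dirs[f].view(3, 1, 1)).sum(0, keepdim=True).clamp(min=0)
        pred = base_color.clamp(0, 1) * ndotl
        loss = loss + torch.mean((pred - target_frames[f]) ** 2)
    loss.backward()
    opt.step()
```

The key property this sketch shares with the paper's pipeline is that every rendering operation is differentiable, so material maps can be recovered directly by backpropagation through the image-formation model.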
Detail-Preserving Real-Time Hair Strand Linking and Filtering
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70176
T. Huang, J. Yuan, R. Hu, L. Wang, Y. Guo, B. Chen, J. Guo, J. Zhu
Abstract: Realistic hair rendering remains a significant challenge in computer graphics due to the intricate microstructure of hair fibers and their anisotropic scattering properties, which make them highly sensitive to noise. Although recent advances in image-space and 3D-space denoising and antialiasing techniques have enabled real-time rendering in simple scenes, existing methods still struggle with excessive blurring and artifacts, particularly in fine hair details such as flyaway strands. These issues arise because current techniques often fail to preserve sub-pixel continuity and lack directional sensitivity in the filtering process. To address these limitations, we introduce a novel real-time hair filtering technique that effectively reconstructs fine fiber details while suppressing noise. Our method improves visual quality by maintaining strand-level detail while remaining computationally efficient, making it well suited for real-time applications in video games and in virtual reality (VR) and augmented reality (AR) environments.
Computer Graphics Forum, volume 44, issue 4.
Citations: 0
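The directional-sensitivity idea can be sketched as an anisotropic image-space filter that blurs along the local strand tangent and not across it. The tangent field, the Gaussian falloffs, and the wrap-around border handling below are illustrative assumptions, not the paper's filter.

```python
# Minimal sketch: direction-aware filtering. Each pixel averages neighbours
# weighted by how well the offset aligns with the local strand tangent, so
# blur runs along fibres rather than across them.
import torch

def strand_aware_filter(img, tangents, radius=3, sigma_along=2.0, sigma_across=0.6):
    # img: (3, H, W); tangents: (2, H, W) unit vectors along strands.
    out = torch.zeros_like(img)
    weight_sum = torch.zeros(1, img.shape[1], img.shape[2])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # torch.roll wraps at the border; real code would clamp instead.
            shifted = torch.roll(img, shifts=(dy, dx), dims=(1, 2))
            # Decompose the pixel offset into along-/across-tangent components.
            along = dx * tangents[0] + dy * tangents[1]
            across = -dx * tangents[1] + dy * tangents[0]
            w = torch.exp(-(along / sigma_along) ** 2 - (across / sigma_across) ** 2)
            out += shifted * w
            weight_sum += w
    return out / weight_sum.clamp(min=1e-6)

img = torch.rand(3, 128, 128)
theta = torch.rand(128, 128) * 3.14159           # assumed per-pixel strand angle
tangents = torch.stack([torch.cos(theta), torch.sin(theta)])
filtered = strand_aware_filter(img, tangents)
```

The narrow across-tangent sigma is what preserves thin flyaway strands: perpendicular neighbours get almost no weight, so a one-pixel-wide fiber is smoothed lengthwise instead of being blurred away.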
SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70171
J. Li, F. Hahlbohm, T. Scholz, M. Eisemann, J.P. Tauscher, M. Magnor
Abstract: In this paper we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach for 360-degree panoramic images. While existing methods built on Neural Radiance Fields or 3D Gaussian Splatting have difficulty achieving real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian scene representation and ray-casting-based rendering to attain fast and accurate results. Central to our new approach is the exact calculation of axis-aligned bounding boxes for spherical images, which significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset of ten real-world scenes recorded with a drone, incorporating both calibrated 360-degree panoramic images and perspective images captured simultaneously, i.e., with the same flight trajectory. Our evaluation on this new dataset as well as established benchmarks demonstrates that SPaGS surpasses state-of-the-art methods in both rendering quality and speed.
Computer Graphics Forum, volume 44, issue 4. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70171
Citations: 0
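The following sketch shows a conservative approximation of the key ingredient: bounding a 3D Gaussian's footprint on an equirectangular panorama. It bounds the Gaussian by a sphere of radius 3σ_max and converts its angular extent into a longitude/latitude rectangle; the paper derives the exact axis-aligned bounding boxes, whereas this sketch ignores pole and wrap-around handling, and the axis conventions are assumptions.

```python
# Minimal sketch: conservative screen-space bounds of a 3D Gaussian on an
# equirectangular image. Not the exact AABB computation derived in the paper.
import numpy as np

def spherical_aabb(center, cov, width, height, eps=1e-8):
    # center: Gaussian mean in camera space; cov: 3x3 covariance matrix.
    sigma_max = np.sqrt(np.linalg.eigvalsh(cov).max())  # largest std deviation
    d = np.linalg.norm(center)
    # Angular radius of the bounding sphere of radius 3*sigma_max seen from origin.
    ang_radius = np.arcsin(min(1.0, 3.0 * sigma_max / max(d, eps)))
    lon = np.arctan2(center[0], center[2])                     # assumed x/z convention
    lat = np.arcsin(np.clip(center[1] / max(d, eps), -1, 1))   # assumed y-up
    # Map the lon/lat interval to pixel coordinates (no pole/wrap handling).
    x0 = (lon - ang_radius + np.pi) / (2 * np.pi) * width
    x1 = (lon + ang_radius + np.pi) / (2 * np.pi) * width
    y0 = (lat - ang_radius + np.pi / 2) / np.pi * height
    y1 = (lat + ang_radius + np.pi / 2) / np.pi * height
    return x0, y0, x1, y1

print(spherical_aabb(np.array([0.5, 0.1, 2.0]), 0.01 * np.eye(3),
                     width=2048, height=1024))
```

Tight bounds matter because every pixel inside the rectangle spawns a ray-Gaussian intersection test; an exact box, as computed in the paper, shrinks that work substantially compared to conservative approximations like this one.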
DiffNEG: A Differentiable Rasterization Framework for Online Aiming Optimization in Solar Power Tower Systems
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70166
Cangping Zheng, Xiaoxia Lin, Dongshuai Li, Yuhong Zhao, Jieqing Feng
Abstract: Inverse rendering aims to infer scene parameters from observed images. In Solar Power Tower (SPT) systems, this corresponds to an aiming optimization problem: adjusting heliostats' orientations to shape the radiative flux density distribution (RFDD) on the receiver so that it conforms to a desired distribution. The SPT system is widely favored in renewable energy, where aiming optimization is crucial for thermal efficiency and safety. However, traditional aiming optimization methods are inefficient and fail to meet online demands. In this paper, a novel optimization approach, DiffNEG, is proposed. DiffNEG introduces a differentiable rasterization method that models the reflected radiative flux of each heliostat as an elliptical Gaussian distribution. It leverages data-driven techniques to enhance simulation accuracy and employs automatic differentiation combined with gradient descent to achieve online, gradient-guided optimization in a continuous solution space. Experiments on a real large-scale heliostat field with nearly 30,000 heliostats demonstrate that DiffNEG can optimize within 10 seconds, improving efficiency by one order of magnitude over the recent DiffMCRT method and by three orders of magnitude over traditional heuristic methods, while also exhibiting superior robustness under both steady and transient states.
Computer Graphics Forum, volume 44, issue 4.
Citations: 0
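A minimal sketch of the core optimization follows: model each heliostat's reflected flux on the receiver as a Gaussian spot centered at its aim point, sum the spots, and gradient-descend the aim points so the normalized total flux matches a desired map. Isotropic (rather than elliptical) Gaussians, the target shape, and the learning rate are simplifying assumptions; the paper's model is elliptical and data-calibrated.

```python
# Minimal sketch: differentiable flux rasterization plus autodiff aiming.
import torch

n_heliostats, H, W = 256, 64, 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W),
                        indexing="ij")

aim = torch.rand(n_heliostats, 2, requires_grad=True)  # aim points in [0,1]^2
sigma = 0.05                                           # assumed spot size
target = torch.exp(-((xs - 0.5) ** 2 + (ys - 0.5) ** 2) / 0.08)  # desired RFDD shape

opt = torch.optim.Adam([aim], lr=5e-3)
for step in range(300):
    opt.zero_grad()
    # One Gaussian spot per heliostat, summed over the receiver grid.
    dx = xs[None] - aim[:, 0, None, None]
    dy = ys[None] - aim[:, 1, None, None]
    flux = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2)).sum(0)
    # Match distributions up to scale by normalizing both to unit mass.
    loss = torch.mean((flux / flux.sum() - target / target.sum()) ** 2)
    loss.backward()
    opt.step()
```

Because the flux map is an analytic, differentiable function of the aim points, gradients reach all heliostats in one backward pass, which is what makes second-scale online optimization plausible compared to Monte Carlo or heuristic search.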
Perceived quality of BRDF models
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70162
Behnaz Kavoosighafi, Rafał K. Mantiuk, Saghi Hajisharif, Ehsan Miandji, Jonas Unger
Abstract: Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which must trade accuracy for complexity and storage cost. To investigate current practices in BRDF modeling, we collect the first high-dynamic-range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the loss functions currently used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and a simple Euclidean distance in the ITP color space (ΔE_ITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.
Computer Graphics Forum, volume 44, issue 4. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70162
Citations: 0
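As a concrete example of the paper's finding on BRDF-space losses, here is a minimal sketch of a log-mapped reflectance MSE. The cosine weighting shown is one common convention and an assumption here, not necessarily the exact loss evaluated in the study.

```python
# Minimal sketch: mean-squared error between log-mapped reflectance values,
# the family of BRDF-space losses the study finds most effective for fitting.
import torch

def log_mapped_brdf_loss(brdf_fit, brdf_ref, cos_theta_out):
    # brdf_*: sampled reflectance values; cos_theta_out: matching cosine weights.
    # log1p compresses the huge dynamic range of specular peaks so that dark,
    # perceptually important regions are not drowned out.
    f = torch.log1p(brdf_fit * cos_theta_out)
    r = torch.log1p(brdf_ref * cos_theta_out)
    return torch.mean((f - r) ** 2)

fit = torch.rand(10_000) * 5.0   # fitted-model samples (assumed)
ref = torch.rand(10_000) * 5.0   # measured-reference samples (assumed)
cos = torch.rand(10_000)
print(log_mapped_brdf_loss(fit, ref, cos))
```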
Real-time Level-of-detail Strand-based Rendering
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70181
T. Huang, Y. Zhou, D. Lin, J. Zhu, L. Yan, K. Wu
Abstract: We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model that accurately captures both single and multiple scattering within a cluster, for hairs and fibers. Building on this, we introduce a LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. In tests on diverse hairstyles with various hair colors and animations, as well as knit patches, our framework closely replicates the appearance of multiple-scattered full geometry at various viewing distances, achieving up to a 13× speedup.
Computer Graphics Forum, volume 44, issue 4.
Citations: 0
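The LoD switching criterion can be sketched as a projected-screen-width test under a pinhole camera model; the one-pixel threshold and all numbers below are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: per-cluster LoD selection by projected screen width.
def projected_width_px(world_radius, depth, focal_px):
    # Pinhole camera: pixel width of a disc of given radius at given depth.
    return 2.0 * world_radius * focal_px / max(depth, 1e-6)

def select_lod(clusters, focal_px, threshold_px=1.0):
    # clusters: list of (radius, depth) pairs; keep individual strands while a
    # cluster is still wider than the threshold, otherwise collapse it to one
    # thick strand rendered with the aggregated scattering model.
    return ["strands" if projected_width_px(r, z, focal_px) > threshold_px
            else "thick" for r, z in clusters]

print(select_lod([(0.002, 0.5), (0.002, 10.0)], focal_px=1500))
# ['strands', 'thick']: the near cluster keeps its hairs, the far one collapses.
```

Making the decision per cluster, as the abstract describes, is what lets near and far parts of the same hairstyle use different representations without a visible global pop.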
Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70182
J. Daniel Subias, Saul Daniel-Soriano, Diego Gutierrez, Ana Serrano
Abstract: Large diffusion models have made a remarkable leap in synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control over key material appearance properties such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset of 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., "A glossy bunny hand painted with an orange soft crayon"); and (3) we train ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects from simple inputs such as edge maps, hand-drawn sketches, or clip arts. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.
Computer Graphics Forum, volume 44, issue 4. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70182
Citations: 0
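For readers wanting to reproduce the general setup, here is a minimal sketch of conditioning Stable Diffusion XL on an edge map with a ControlNet via the diffusers library. The checkpoint names are publicly available stand-ins, not the authors' fine-tuned gloss-aware weights, and the input file name is hypothetical.

```python
# Minimal sketch: edge-map-conditioned SDXL generation with a ControlNet.
# Requires a CUDA GPU; swap in the authors' weights to reproduce their results.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",      # stand-in for the paper's ControlNet
    torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

edge_map = load_image("bunny_edges.png")        # hypothetical edge-map input
image = pipe("A glossy bunny hand painted with an orange soft crayon",
             image=edge_map, num_inference_steps=30).images[0]
image.save("stylized_bunny.png")
```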
StructuReiser: A Structure-preserving Video Stylization Method
IF 2.9 | CAS Tier 4 | Computer Science
Computer Graphics Forum Pub Date : 2025-07-24 DOI: 10.1111/cgf.70161
R. Spetlik, D. Futschik, D. Sýkora
Abstract: We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike most existing methods, StructuReiser strictly adheres to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This provides a level of control and consistency that is challenging to achieve with text-driven or keyframe-based approaches, including large video models. Furthermore, StructuReiser supports real-time inference on standard graphics hardware as well as custom keyframe editing, enabling interactive applications and expanding the possibilities for creative expression and video manipulation.
Computer Graphics Forum, volume 44, issue 4. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.70161
Citations: 0
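As a rough illustration of keyframe-supervised video stylization in general (not the authors' architecture), the sketch below trains a small image-to-image network on (frame, stylized keyframe) pairs and then runs it independently on every frame; all shapes, the network, and the training schedule are illustrative assumptions.

```python
# Minimal sketch: learn a per-frame stylization mapping from keyframe pairs,
# then apply it to the whole video. A generic setup, not StructuReiser itself.
import torch
import torch.nn as nn

net = nn.Sequential(                       # tiny stand-in translation network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=2e-4)

# User-provided supervision: (original frame, hand-stylized keyframe) pairs.
keyframes = [(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))]
for epoch in range(100):
    for src, styl in keyframes:
        opt.zero_grad()
        loss = torch.mean((net(src) - styl) ** 2)
        loss.backward()
        opt.step()

video = torch.rand(24, 3, 128, 128)        # remaining frames (assumed)
with torch.no_grad():
    stylized = torch.cat([net(f[None]) for f in video])
```

Running the learned mapping per frame keeps the output locked to the input video's structure, which matches the abstract's emphasis on identity preservation and real-time inference.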