ACM Transactions on Graphics: Latest Articles

Learning Based Toolpath Planner on Diverse Graphs for 3D Printing
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687933
Yuming Huang, Yuhu Guo, Renbo Su, Xingjian Han, Junhao Ding, Tianyu Zhang, Tao Liu, Weiming Wang, Guoxin Fang, Xu Song, Emily Whiting, Charlie Wang
{"title":"Learning Based Toolpath Planner on Diverse Graphs for 3D Printing","authors":"Yuming Huang, Yuhu Guo, Renbo Su, Xingjian Han, Junhao Ding, Tianyu Zhang, Tao Liu, Weiming Wang, Guoxin Fang, Xu Song, Emily Whiting, Charlie Wang","doi":"10.1145/3687933","DOIUrl":"https://doi.org/10.1145/3687933","url":null,"abstract":"This paper presents a learning based planner for computing optimized 3D printing toolpaths on prescribed graphs, the challenges of which include the varying graph structures on different models and the large scale of nodes &amp; edges on a graph. We adopt an on-the-fly strategy to tackle these challenges, formulating the planner as a <jats:italic>Deep Q-Network</jats:italic> (DQN) based optimizer to decide the next 'best' node to visit. We construct the state spaces by the <jats:italic>Local Search Graph</jats:italic> (LSG) centered at different nodes on a graph, which is encoded by a carefully designed algorithm so that LSGs in similar configurations can be identified to re-use the earlier learned DQN priors for accelerating the computation of toolpath planning. Our method can cover different 3D printing applications by defining their corresponding reward functions. Toolpath planning problems in wire-frame printing, continuous fiber printing, and metallic printing are selected to demonstrate its generality. The performance of our planner has been verified by testing the resultant toolpaths in physical experiments. By using our planner, wire-frame models with up to 4.2k struts can be successfully printed, up to 93.3% of sharp turns on continuous fiber toolpaths can be avoided, and the thermal distortion in metallic printing can be reduced by 24.9%.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"38 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
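The next-node decision described in the abstract can be pictured as a small Q-network that scores candidate neighbors and greedily picks the best one. The sketch below is a hypothetical, heavily simplified PyTorch illustration; the feature dimension, network shape, and the choose_next_node helper are assumptions, not the paper's LSG encoder or training setup.

```python
# Hypothetical sketch: greedy next-node selection with a small Q-network.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value per candidate node
        )

    def forward(self, lsg_feats):          # lsg_feats: (num_candidates, feat_dim)
        return self.mlp(lsg_feats).squeeze(-1)

def choose_next_node(qnet, candidate_ids, lsg_feats):
    """Pick the candidate neighbor with the highest predicted Q-value."""
    with torch.no_grad():
        q = qnet(lsg_feats)
    return candidate_ids[int(torch.argmax(q))]

# Toy usage: 5 candidate nodes, each described by a 16-d local-graph feature.
qnet = QNet()
feats = torch.randn(5, 16)
print(choose_next_node(qnet, [3, 7, 9, 12, 20], feats))
```
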
Volumetric Homogenization for Knitwear Simulation
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687911
Chun Yuan, Haoyang Shi, Lei Lan, Yuxing Qiu, Cem Yuksel, Huamin Wang, Chenfanfu Jiang, Kui Wu, Yin Yang
{"title":"Volumetric Homogenization for Knitwear Simulation","authors":"Chun Yuan, Haoyang Shi, Lei Lan, Yuxing Qiu, Cem Yuksel, Huamin Wang, Chenfanfu Jiang, Kui Wu, Yin Yang","doi":"10.1145/3687911","DOIUrl":"https://doi.org/10.1145/3687911","url":null,"abstract":"This paper presents volumetric homogenization, a spatially varying homogenization scheme for knitwear simulation. We are motivated by the observation that macro-scale fabric dynamics is strongly correlated with its underlying knitting patterns. Therefore, homogenization towards a single material is less effective when the knitting is complex and non-repetitive. Our method tackles this challenge by homogenizing the yarn-level material locally at volumetric elements. Assigning a virtual volume of a knitting structure enables us to model bending and twisting effects via a simple volume-preserving penalty and thus effectively alleviates the material nonlinearity. We employ an adjoint Gauss-Newton formulation[Zehnder et al. 2021] to battle the dimensionality challenge of such per-element material optimization. This intuitive material model makes the forward simulation GPU-friendly. To this end, our pipeline also equips a novel domain-decomposed subspace solver crafted for GPU projective dynamics, which makes our simulator hundreds of times faster than the yarn-level simulator. Experiments validate the capability and effectiveness of volumetric homogenization. Our method produces realistic animations of knitwear matching the quality of full-scale yarn-level simulations. It is also orders of magnitude faster than existing homogenization techniques in both the training and simulation stages.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"22 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
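The "simple volume-preserving penalty" mentioned above can be illustrated, under assumptions, by a generic energy that penalizes deviation of the deformation-gradient determinant from one; the paper's exact energy and stiffness are not specified here, so this is only a sketch.

```python
# Illustrative sketch (not the paper's formulation): a per-element
# volume-preserving penalty of the form k * (det(F) - 1)^2, where F is the
# deformation gradient of a virtual volumetric element.
import numpy as np

def volume_penalty(F, stiffness=1.0):
    """Energy that grows as the element's volume deviates from its rest volume."""
    J = np.linalg.det(F)          # relative volume change
    return stiffness * (J - 1.0) ** 2

F = np.array([[1.05, 0.02, 0.0],
              [0.0,  0.98, 0.0],
              [0.0,  0.0,  1.01]])
print(volume_penalty(F))
```
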
All you need is rotation: Construction of developable strips
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687947
Takashi Maekawa, Felix Scholz
{"title":"All you need is rotation: Construction of developable strips","authors":"Takashi Maekawa, Felix Scholz","doi":"10.1145/3687947","DOIUrl":"https://doi.org/10.1145/3687947","url":null,"abstract":"We present a novel approach to generate developable strips along a space curve. The key idea of the new method is to use the rotation angle between the Frenet frame of the input space curve, and its Darboux frame of the curve on the resulting developable strip as a free design parameter, thereby revolving the strip around the tangential axis of the input space curve. This angle is not restricted to be constant but it can be any differentiable function defined on the curve, thereby creating a large design space of developable strips that share a common directrix curve. The range of possibilities for choosing the rotation angle is diverse, encompassing constant angles, linearly varying angles, sinusoidal patterns, and even solutions derived from initial value problems involving ordinary differential equations. This enables the potential of the proposed method to be used for a wide range of practical applications, spanning fields such as architectural design, industrial design, and papercraft modeling. In our computational and physical examples, we demonstrate the flexibility of the method by constructing, among others, toroidal and helical windmill blades for papercraft models, curved foldings, triply orthogonal structures, and developable strips featuring a log-aesthetic directrix curve.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"10 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
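A minimal numerical sketch of the core idea: rotate the Frenet normal and binormal about the curve tangent by a free angle function theta(s). It only builds the rotated frame on a sample helix; the paper's actual construction additionally derives the rulings that make the resulting strip developable, which is not reproduced here, and the discretization below is an assumption.

```python
# Simplified sketch: rotate the Frenet normal/binormal of a discrete space
# curve about the tangent by an angle function theta(s).
import numpy as np

def frenet_frames(points):
    """Approximate tangent/normal/binormal frames on a discrete curve."""
    T = np.gradient(points, axis=0)
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    dT = np.gradient(T, axis=0)
    N = dT / (np.linalg.norm(dT, axis=1, keepdims=True) + 1e-12)
    B = np.cross(T, N)
    return T, N, B

def rotated_frame(points, theta):
    """Rotate N and B about T by theta (one angle per curve sample)."""
    T, N, B = frenet_frames(points)
    c, s = np.cos(theta)[:, None], np.sin(theta)[:, None]
    return T, c * N + s * B, -s * N + c * B

s = np.linspace(0, 4 * np.pi, 200)
helix = np.stack([np.cos(s), np.sin(s), 0.3 * s], axis=1)
T, N_rot, B_rot = rotated_frame(helix, theta=0.5 * np.sin(s))
print(N_rot.shape)
```
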
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687934
Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, Zan Gojcic
{"title":"3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes","authors":"Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, Zan Gojcic","doi":"10.1145/3687934","DOIUrl":"https://doi.org/10.1145/3687934","url":null,"abstract":"Particle-based representations of radiance fields such as 3D Gaussian Splatting have found great success for reconstructing and re-rendering of complex scenes. Most existing methods render particles via rasterization, projecting them to screen space tiles for processing in a sorted order. This work instead considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance GPU ray tracing hardware. To efficiently handle large numbers of semi-transparent particles, we describe a specialized rendering algorithm which encapsulates particles with bounding meshes to leverage fast ray-triangle intersections, and shades batches of intersections in depth-order. The benefits of ray tracing are well-known in computer graphics: processing incoherent rays for secondary lighting effects such as shadows and reflections, rendering from highly-distorted cameras common in robotics, stochastically sampling rays, and more. With our renderer, this flexibility comes at little cost compared to rasterization. Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision. We further propose related improvements to the basic Gaussian representation, including a simple use of generalized kernel functions which significantly reduces particle hit counts.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"14 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
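Of the pipeline above, the depth-ordered shading of semi-transparent hits is the easiest piece to sketch. The snippet shows only generic front-to-back alpha compositing with early termination along one ray; the BVH construction, bounding proxy meshes, and hardware ray-triangle tests are omitted, and all names are illustrative.

```python
# Minimal sketch: composite hits along a ray that are already sorted by depth.
import numpy as np

def composite_ray(hit_alphas, hit_colors, min_transmittance=1e-3):
    color = np.zeros(3)
    T = 1.0                                  # remaining transmittance
    for a, c in zip(hit_alphas, hit_colors): # hits sorted near-to-far
        color += T * a * c
        T *= 1.0 - a
        if T < min_transmittance:            # ray is saturated; stop shading
            break
    return color, T

alphas = np.array([0.3, 0.5, 0.8])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(composite_ray(alphas, colors))
```
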
Quark: Real-time, High-resolution, and General Neural View Synthesis
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687953
John Flynn, Michael Broxton, Lukas Murmann, Lucy Chai, Matthew DuVall, Clément Godard, Kathryn Heal, Srinivas Kaza, Stephen Lombardi, Xuan Luo, Supreeth Achar, Kira Prabhu, Tiancheng Sun, Lynn Tsai, Ryan Overbeck
{"title":"Quark: Real-time, High-resolution, and General Neural View Synthesis","authors":"John Flynn, Michael Broxton, Lukas Murmann, Lucy Chai, Matthew DuVall, Clément Godard, Kathryn Heal, Srinivas Kaza, Stephen Lombardi, Xuan Luo, Supreeth Achar, Kira Prabhu, Tiancheng Sun, Lynn Tsai, Ryan Overbeck","doi":"10.1145/3687953","DOIUrl":"https://doi.org/10.1145/3687953","url":null,"abstract":"We present a novel neural algorithm for performing high-quality, highresolution, real-time novel view synthesis. From a sparse set of input RGB images or videos streams, our network both reconstructs the 3D scene and renders novel views at 1080p resolution at 30fps on an NVIDIA A100. Our feed-forward network generalizes across a wide variety of datasets and scenes and produces state-of-the-art quality for a real-time method. Our quality approaches, and in some cases surpasses, the quality of some of the top offline methods. In order to achieve these results we use a novel combination of several key concepts, and tie them together into a cohesive and effective algorithm. We build on previous works that represent the scene using semi-transparent layers and use an iterative learned render-and-refine approach to improve those layers. Instead of flat layers, our method reconstructs layered depth maps (LDMs) that efficiently represent scenes with complex depth and occlusions. The iterative update steps are embedded in a multi-scale, UNet-style architecture to perform as much compute as possible at reduced resolution. Within each update step, to better aggregate the information from multiple input views, we use a specialized Transformer-based network component. This allows the majority of the per-input image processing to be performed in the input image space, as opposed to layer space, further increasing efficiency. Finally, due to the real-time nature of our reconstruction and rendering, we dynamically create and discard the internal 3D geometry for each frame, generating the LDM for each view. Taken together, this produces a novel and effective algorithm for view synthesis. Through extensive evaluation, we demonstrate that we achieve state-of-the-art quality at real-time rates.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"14 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142672828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
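A toy sketch of how a layered depth map (LDM) can be turned into a pixel color: because every layer carries a per-pixel depth, the compositing order may differ from pixel to pixel. Shapes, the layer count, and the function name are illustrative assumptions; this is not the paper's renderer or network.

```python
# Toy LDM compositing: sort layers per pixel by depth, then blend front-to-back.
import numpy as np

def composite_ldm(depth, rgba):
    """depth: (L, H, W); rgba: (L, H, W, 4). Returns an (H, W, 3) image."""
    order = np.argsort(depth, axis=0)                 # near-to-far, per pixel
    sorted_rgba = np.take_along_axis(rgba, order[..., None], axis=0)
    out = np.zeros(depth.shape[1:] + (3,))
    T = np.ones(depth.shape[1:] + (1,))               # per-pixel transmittance
    for l in range(depth.shape[0]):
        a = sorted_rgba[l, ..., 3:4]
        out += T * a * sorted_rgba[l, ..., :3]
        T *= 1.0 - a
    return out

L, H, W = 4, 8, 8
img = composite_ldm(np.random.rand(L, H, W), np.random.rand(L, H, W, 4))
print(img.shape)
```
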
Neural Kernel Regression for Consistent Monte Carlo Denoising
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687949
Pengju Qiao, Qi Wang, Yuchi Huo, Shiji Zhai, Zixuan Xie, Wei Hua, Hujun Bao, Tao Liu
{"title":"Neural Kernel Regression for Consistent Monte Carlo Denoising","authors":"Pengju Qiao, Qi Wang, Yuchi Huo, Shiji Zhai, Zixuan Xie, Wei Hua, Hujun Bao, Tao Liu","doi":"10.1145/3687949","DOIUrl":"https://doi.org/10.1145/3687949","url":null,"abstract":"Unbiased Monte Carlo path tracing that is extensively used in realistic rendering produces undesirable noise, especially with low samples per pixel (spp). Recently, several methods have coped with this problem by importing unbiased noisy images and auxiliary features to neural networks to either predict a fixed-sized kernel for convolution or directly predict the denoised result. Since it is impossible to produce arbitrarily high spp images as the training dataset, the network-based denoising fails to produce high-quality images under high spp. More specifically, network-based denoising is inconsistent and does not converge to the ground truth as the sampling rate increases. On the other hand, the post-correction estimators yield a blending coefficient for a pair of biased and unbiased images influenced by image errors or variances to ensure the consistency of the denoised image. As the sampling rate increases, the blending coefficient of the unbiased image converges to 1, that is, using the unbiased image as the denoised results. However, these estimators usually produce artifacts due to the difficulty of accurately predicting image errors or variances with low spp. To address the above problems, we take advantage of both kernel-predicting methods and post-correction denoisers. A novel kernel-based denoiser is proposed based on distribution-free kernel regression consistency theory, which does not explicitly combine the biased and unbiased results but constrain the kernel bandwidth to produce consistent results under high spp. Meanwhile, our kernel regression method explores bandwidth optimization in the robust auxiliary feature space instead of the noisy image space. This leads to consistent high-quality denoising at both low and high spp. Experiment results demonstrate that our method outperforms existing denoisers in accuracy and consistency.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"197 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
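A bare-bones version of kernel regression over auxiliary features, to make the idea concrete: each denoised pixel is a kernel-weighted average of its neighbors, with weights computed from feature distances and a bandwidth h. Here h is a fixed constant and the features are random placeholders; in the paper the bandwidth is what gets optimized and constrained for consistency.

```python
# Minimal cross/joint kernel regression in an auxiliary-feature space.
import numpy as np

def kernel_regress(noisy, feats, radius=3, h=0.2):
    """noisy: (H, W, 3) radiance; feats: (H, W, F) auxiliary features."""
    H, W, _ = noisy.shape
    out = np.zeros_like(noisy)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            d2 = np.sum((feats[y0:y1, x0:x1] - feats[y, x]) ** 2, axis=-1)
            w = np.exp(-d2 / (2.0 * h * h))              # Gaussian kernel weights
            out[y, x] = np.sum(w[..., None] * noisy[y0:y1, x0:x1], axis=(0, 1)) / w.sum()
    return out

img = np.random.rand(16, 16, 3)
aux = np.random.rand(16, 16, 6)   # e.g. albedo + normal as auxiliary features
print(kernel_regress(img, aux).shape)
```
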
Chebyshev Parameterization for Woven Fabric Modeling
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687928
Annika Öhri, Aviv Segall, Jing Ren, Olga Sorkine-Hornung
{"title":"Chebyshev Parameterization for Woven Fabric Modeling","authors":"Annika Öhri, Aviv Segall, Jing Ren, Olga Sorkine-Hornung","doi":"10.1145/3687928","DOIUrl":"https://doi.org/10.1145/3687928","url":null,"abstract":"Distortion-minimizing surface parameterization is an essential step for computing 2D pieces necessary to fabricate a target 3D shape from flat material. Garment design and textile fabrication are a prominent application example. Common distortion measures quantify length, angle or area preservation in an isotropic manner, so that when applied to woven textile fabrication, they implicitly assume fabric behaves like paper, which is inextensible in all directions and does not permit shearing. However, woven fabric differs significantly from paper: it exhibits anisotropy along the yarn directions and allows for some degree of shearing. We propose a novel distortion energy based on Chebyshev nets that anisotropically penalizes shearing and stretching. Our energy formulation can be used as an optimization objective for surface parameterization and is simple to minimize via a local-global algorithm. We demonstrate its advantages in modeling nets or woven fabric behavior over the commonly used isotropic distortion energies.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"38 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
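A per-element toy energy in the spirit of a Chebyshev net, to make the anisotropy concrete: the columns of the 2x2 Jacobian J are the images of the warp and weft directions, stretch along either yarn is penalized strongly, and shear (loss of orthogonality between yarns) only weakly. The weights and exact form are illustrative assumptions, not the paper's energy or its local-global solver.

```python
# Illustrative anisotropic distortion energy for one mapped element.
import numpy as np

def chebyshev_energy(J, w_stretch=100.0, w_shear=1.0):
    u, v = J[:, 0], J[:, 1]                 # mapped yarn (warp/weft) directions
    stretch = (np.linalg.norm(u) - 1.0) ** 2 + (np.linalg.norm(v) - 1.0) ** 2
    shear = np.dot(u, v) ** 2               # zero when the yarns stay orthogonal
    return w_stretch * stretch + w_shear * shear

J = np.array([[1.0, 0.2],
              [0.0, 0.98]])
print(chebyshev_energy(J))
```
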
Geometry-Aware Retargeting for Two-Skinned Characters Interaction
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687962
Inseo Jang, Soojin Choi, Seokhyeon Hong, Chaelin Kim, Junyong Noh
{"title":"Geometry-Aware Retargeting for Two-Skinned Characters Interaction","authors":"Inseo Jang, Soojin Choi, Seokhyeon Hong, Chaelin Kim, Junyong Noh","doi":"10.1145/3687962","DOIUrl":"https://doi.org/10.1145/3687962","url":null,"abstract":"Interactive motion between multiple characters is widely utilized in games and movies. However, the method for generating interactive motions considering the character's diverse mesh shape has yet to be studied. We propose a Spatio Cooperative Transformer (SCT) to retarget the interacting motions of two characters having arbitrary mesh connectivity. SCT predicts the residual of root position and joint rotations considering the shape difference between the source and target of interacting characters. In addition, we introduce an anchor loss function for SCT to maintain the geometric distance between the interacting characters when they are retargeted. We also propose a motion augmentation method with deformation-based adaptation to prepare a source-target paired dataset with an identical mesh connectivity for training. In experiments, our method achieved higher accuracy for semantic preservation and produced less artifacts of inter-penetration between the interacting characters for unseen characters and motions than the baselines. Moreover, we conducted a user evaluation using characters with various shapes, spanning low-to-high interaction levels to prove better semantic preservation of our method compared to previous studies.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"99 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
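The anchor loss described above can be sketched as keeping the pairwise distances between corresponding anchor points on the two characters unchanged after retargeting. The tensor shapes and function name below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical anchor-style loss between two interacting characters.
import torch

def anchor_loss(src_a, src_b, tgt_a, tgt_b):
    """src_*/tgt_*: (N, 3) anchor positions on characters A and B."""
    d_src = torch.norm(src_a - src_b, dim=-1)   # distances before retargeting
    d_tgt = torch.norm(tgt_a - tgt_b, dim=-1)   # distances after retargeting
    return torch.mean((d_src - d_tgt) ** 2)

a, b = torch.rand(8, 3), torch.rand(8, 3)
print(anchor_loss(a, b, a + 0.01, b - 0.01))
```
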
Particle-Laden Fluid on Flow Maps
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687916
Zhiqi Li, Duowen Chen, Candong Lin, Jinyuan Liu, Bo Zhu
{"title":"Particle-Laden Fluid on Flow Maps","authors":"Zhiqi Li, Duowen Chen, Candong Lin, Jinyuan Liu, Bo Zhu","doi":"10.1145/3687916","DOIUrl":"https://doi.org/10.1145/3687916","url":null,"abstract":"We propose a novel framework for simulating ink as a particle-laden flow using particle flow maps. Our method addresses the limitations of existing flow-map techniques, which struggle with dissipative forces like viscosity and drag, thereby extending the application scope from solving the Euler equations to solving the Navier-Stokes equations with accurate viscosity and laden-particle treatment. Our key contribution lies in a coupling mechanism for two particle systems, coupling physical sediment particles and virtual flow-map particles on a background grid by solving a Poisson system. We implemented a novel path integral formula to incorporate viscosity and drag forces into the particle flow map process. Our approach enables state-of-the-art simulation of various particle-laden flow phenomena, exemplified by the bulging and breakup of suspension drop tails, torus formation, torus disintegration, and the coalescence of sedimenting drops. In particular, our method delivered high-fidelity ink diffusion simulations by accurately capturing vortex bulbs, viscous tails, fractal branching, and hierarchical structures.","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"35 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
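The drag coupling between sediment particles and the carrier fluid can be illustrated, in grossly simplified form, by an explicit Stokes-like relaxation of particle velocities toward the local fluid velocity. The paper instead folds viscosity and drag into a path-integral formulation on the flow map, which is not reproduced here; coefficients and names below are placeholders.

```python
# Simplified sketch of one explicit drag-coupling step for laden particles.
import numpy as np

def drag_step(v_particle, v_fluid, drag_coeff=5.0, dt=1e-2):
    """Relax particle velocities toward the locally sampled fluid velocity."""
    return v_particle + dt * drag_coeff * (v_fluid - v_particle)

vp = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # sediment particle velocities
vf = np.array([[0.5, 0.1, 0.0], [0.5, 0.1, 0.0]])   # fluid velocity at particles
print(drag_step(vp, vf))
```
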
ToonCrafter: Generative Cartoon Interpolation
IF 6.2, Tier 1 (Computer Science)
ACM Transactions on Graphics, Pub Date: 2024-11-19, DOI: 10.1145/3687761
Jinbo Xing, Hanyuan Liu, Menghan Xia, Yong Zhang, Xintao Wang, Ying Shan, Tien-Tsin Wong
{"title":"ToonCrafter: Generative Cartoon Interpolation","authors":"Jinbo Xing, Hanyuan Liu, Menghan Xia, Yong Zhang, Xintao Wang, Ying Shan, Tien-Tsin Wong","doi":"10.1145/3687761","DOIUrl":"https://doi.org/10.1145/3687761","url":null,"abstract":"We introduce ToonCrafter, a novel approach that transcends traditional correspondence-based cartoon video interpolation, paving the way for generative interpolation. Traditional methods, that implicitly assume linear motion and the absence of complicated phenomena like dis-occlusion, often struggle with the exaggerated non-linear and large motions with occlusion commonly found in cartoons, resulting in implausible or even failed interpolation results. To overcome these limitations, we explore the potential of adapting live-action video priors to better suit cartoon interpolation within a generative framework. ToonCrafter effectively addresses the challenges faced when applying live-action video motion priors to generative cartoon interpolation. First, we design a toon rectification learning strategy that seamlessly adapts live-action video priors to the cartoon domain, resolving the domain gap and content leakage issues. Next, we introduce a dual-reference-based 3D decoder to compensate for lost details due to the highly compressed latent prior spaces, ensuring the preservation of fine details in interpolation results. Finally, we design a flexible sketch encoder that empowers users with interactive control over the interpolation results. Experimental results demonstrate that our proposed method not only produces visually convincing and more natural dynamics, but also effectively handles dis-occlusion. The comparative evaluation demonstrates the notable superiority of our approach over existing competitors. Code and model weights are available at https://doubiiu.github.io/projects/ToonCrafter","PeriodicalId":50913,"journal":{"name":"ACM Transactions on Graphics","volume":"250 1","pages":""},"PeriodicalIF":6.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142673120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
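For contrast with the generative approach, the "linear motion" assumption called out in the abstract reduces, in its crudest form, to blending the two keyframes directly. Real correspondence-based interpolators warp along estimated motion rather than cross-fading; this snippet is only meant to show why large, non-linear cartoon motion breaks the assumption, and is not related to ToonCrafter's model.

```python
# Naive linear blend between two keyframes; plausible only for tiny, linear motion.
import numpy as np

def linear_interpolate(frame0, frame1, t):
    """Cross-fade two keyframes at time t in [0, 1]."""
    return (1.0 - t) * frame0 + t * frame1

f0 = np.zeros((4, 4, 3))
f1 = np.ones((4, 4, 3))
print(linear_interpolate(f0, f1, 0.5)[0, 0])
```
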