Computer Graphics Forum: Latest Publications

SketchAnim: Real-time sketch animation transfer from videos
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15176
Gaurav Rai, Shreyas Gupta, Ojaswa Sharma
{"title":"SketchAnim: Real-time sketch animation transfer from videos","authors":"Gaurav Rai,&nbsp;Shreyas Gupta,&nbsp;Ojaswa Sharma","doi":"10.1111/cgf.15176","DOIUrl":"https://doi.org/10.1111/cgf.15176","url":null,"abstract":"<p>Animation of hand-drawn sketches is an adorable art. It allows the animator to generate animations with expressive freedom and requires significant expertise. In this work, we introduce a novel sketch animation framework designed to address inherent challenges, such as motion extraction, motion transfer, and occlusion. The framework takes an exemplar video input featuring a moving object and utilizes a robust motion transfer technique to animate the input sketch. We show comparative evaluations that demonstrate the superior performance of our method over existing sketch animation techniques. Notably, our approach exhibits a higher level of user accessibility in contrast to conventional sketch-based animation systems, positioning it as a promising contributor to the field of sketch animation. https://graphics-research-group.github.io/SketchAnim/</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
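The core of such a pipeline — extracting motion from a video and transferring it onto a sketch — can be pictured with a much simpler stand-in. The Python sketch below is not the authors' algorithm; the toy 2D skeleton, the nearest-bone binding rule, and the rigid per-bone transform are all assumptions made only to illustrate skeleton-driven sketch deformation.

```python
import numpy as np

def bind_points_to_bones(points, joints, bones):
    """Assign each sketch point to its nearest bone and store its rest offset."""
    binds = []
    for p in points:
        best, best_d = None, np.inf
        for bi, (a, b) in enumerate(bones):
            pa, pb = joints[a], joints[b]
            t = np.clip(np.dot(p - pa, pb - pa) / (np.dot(pb - pa, pb - pa) + 1e-9), 0, 1)
            d = np.linalg.norm(p - (pa + t * (pb - pa)))
            if d < best_d:
                best, best_d = (bi, p - pa), d
        binds.append(best)
    return binds

def bone_transform(a0, b0, a1, b1):
    """2D rigid transform mapping rest bone (a0, b0) onto posed bone (a1, b1)."""
    v0, v1 = b0 - a0, b1 - a1
    ang = np.arctan2(v1[1], v1[0]) - np.arctan2(v0[1], v0[0])
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, -s], [s, c]]), a1

def deform_sketch(points, binds, rest_joints, posed_joints, bones):
    """Move each bound sketch point rigidly with its bone."""
    out = np.empty_like(points)
    for i, (bi, offset) in enumerate(binds):
        a, b = bones[bi]
        R, origin = bone_transform(rest_joints[a], rest_joints[b],
                                   posed_joints[a], posed_joints[b])
        out[i] = origin + R @ offset
    return out

# Toy example: a two-bone arm with one sketch stroke bound to it.
rest = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
posed = np.array([[0.0, 0.0], [1.0, 0.0], [1.7, 0.7]])   # the elbow bends
bones = [(0, 1), (1, 2)]
stroke = np.array([[0.5, 0.1], [1.5, 0.1]])
binds = bind_points_to_bones(stroke, rest, bones)
print(deform_sketch(stroke, binds, rest, posed, bones))
```

In this toy setup, the per-frame joint positions would come from the exemplar video; occlusion handling, which the paper addresses explicitly, is omitted here.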
Creating a 3D Mesh in A-pose from a Single Image for Character Rigging
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15177
Seunghwan Lee, C. Karen Liu
{"title":"Creating a 3D Mesh in A-pose from a Single Image for Character Rigging","authors":"Seunghwan Lee,&nbsp;C. Karen Liu","doi":"10.1111/cgf.15177","DOIUrl":"https://doi.org/10.1111/cgf.15177","url":null,"abstract":"<p>Learning-based methods for 3D content generation have shown great potential to create 3D characters from text prompts, videos, and images. However, current methods primarily focus on generating static 3D meshes, overlooking the crucial aspect of creating an animatable 3D meshes. Directly using 3D meshes generated by existing methods to create underlying skeletons for animation presents many challenges because the generated mesh might exhibit geometry artifacts or assume arbitrary poses that complicate the subsequent rigging process. This work proposes a new framework for generating a 3D animatable mesh from a single 2D image depicting the character. We do so by enforcing the generated 3D mesh to assume an A-pose, which can mitigate the geometry artifacts and facilitate the use of existing automatic rigging methods. Our approach aims to leverage the generative power of existing models across modalities without the need for new data or large-scale training. We evaluate the effectiveness of our framework with qualitative results, as well as ablation studies and quantitative comparisons with existing 3D mesh generation models.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
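The point of the A-pose constraint is that a canonical pose removes ambiguity before rigging. As a loose, hypothetical illustration (the joint names, rotation convention, and angles below are assumptions, not the paper's method), this snippet resets the arm joints of an estimated skeleton to canonical A-pose rotations before it would be handed to an auto-rigger.

```python
import numpy as np

# Canonical A-pose: arms lowered about 45 degrees from horizontal.
# Joint names and the swing-axis convention here are hypothetical.
A_POSE_LOCAL_ROT = {
    "left_shoulder":  (np.array([0.0, 0.0, 1.0]), np.radians(-45)),
    "right_shoulder": (np.array([0.0, 0.0, 1.0]), np.radians(45)),
    "left_elbow":     (np.array([0.0, 0.0, 1.0]), 0.0),
    "right_elbow":    (np.array([0.0, 0.0, 1.0]), 0.0),
}

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula for a 3x3 rotation matrix."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def canonicalize_to_a_pose(local_rotations):
    """Override arbitrary arm rotations with canonical A-pose rotations,
    leaving all other joints untouched."""
    out = dict(local_rotations)
    for joint, (axis, angle) in A_POSE_LOCAL_ROT.items():
        out[joint] = axis_angle_to_matrix(axis, angle)
    return out

# Usage: a character estimated in an arbitrary pose gets its arms reset.
estimated = {"left_shoulder": axis_angle_to_matrix(np.array([1.0, 0.0, 0.0]), 1.2),
             "spine": np.eye(3)}
print(sorted(canonicalize_to_a_pose(estimated).keys()))
```

The paper instead constrains the generation itself to produce an A-posed mesh; this sketch only shows why a canonical pose simplifies the downstream rigging step.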
Learning to Move Like Professional Counter-Strike Players
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15173
D. Durst, F. Xie, V. Sarukkai, B. Shacklett, I. Frosio, C. Tessler, J. Kim, C. Taylor, G. Bernstein, S. Choudhury, P. Hanrahan, K. Fatahalian
{"title":"Learning to Move Like Professional Counter-Strike Players","authors":"D. Durst,&nbsp;F. Xie,&nbsp;V. Sarukkai,&nbsp;B. Shacklett,&nbsp;I. Frosio,&nbsp;C. Tessler,&nbsp;J. Kim,&nbsp;C. Taylor,&nbsp;G. Bernstein,&nbsp;S. Choudhury,&nbsp;P. Hanrahan,&nbsp;K. Fatahalian","doi":"10.1111/cgf.15173","DOIUrl":"https://doi.org/10.1111/cgf.15173","url":null,"abstract":"<p>In multiplayer, first-person shooter games like Counter-Strike: Global Offensive (CS:GO), coordinated movement is a critical component of high-level strategic play. However, the complexity of team coordination and the variety of conditions present in popular game maps make it impractical to author hand-crafted movement policies for every scenario. We show that it is possible to take a data-driven approach to creating human-like movement controllers for CS:GO. We curate a team movement dataset comprising 123 hours of professional game play traces, and use this dataset to train a transformer-based movement model that generates human-like team movement for all players in a “Retakes” round of the game. Importantly, the movement prediction model is efficient. Performing inference for all players takes less than 0.5 ms per game step (amortized cost) on a single CPU core, making it plausible for use in commercial games today. Human evaluators assess that our model behaves more like humans than both commercially-available bots and procedural movement controllers scripted by experts (16% to 59% higher by TrueSkill rating of “human-like”). Using experiments involving in-game bot vs. bot self-play, we demonstrate that our model performs simple forms of teamwork, makes fewer common movement mistakes, and yields movement distributions, player lifetimes, and kill locations similar to those observed in professional CS:GO match play.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
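The central model is a transformer over per-player state tokens that predicts the next movement for every player. The toy PyTorch module below shows the general shape of such a predictor — per-player features in, per-player position deltas out. All dimensions and the input layout are illustrative and far smaller than anything the paper trains.

```python
import torch
import torch.nn as nn

class TeamMovementModel(nn.Module):
    """Toy transformer that attends over per-player state tokens and predicts
    a 2D position delta for each player at the next game step."""
    def __init__(self, feat_dim=16, model_dim=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, model_dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=model_dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(model_dim, 2)   # (dx, dy) per player

    def forward(self, player_feats):
        # player_feats: (batch, num_players, feat_dim) — e.g. position, health,
        # team flag, recent velocity for each of the 10 players.
        tokens = self.embed(player_feats)
        ctx = self.encoder(tokens)            # players attend to teammates and enemies
        return self.head(ctx)

model = TeamMovementModel().eval()
with torch.no_grad():
    deltas = model(torch.randn(1, 10, 16))    # one round state, 10 players
print(deltas.shape)                           # torch.Size([1, 10, 2])
```

The paper's efficiency claim (sub-0.5 ms amortized inference on one CPU core) is about a model of roughly this structural flavor, though its actual inputs, outputs, and size differ.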
Reactive Gaze during Locomotion in Natural Environments
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15168
J. K. Melgare, D. Rohmer, S. R. Musse, M-P. Cani
{"title":"Reactive Gaze during Locomotion in Natural Environments","authors":"J. K. Melgare,&nbsp;D. Rohmer,&nbsp;S. R. Musse,&nbsp;M-P. Cani","doi":"10.1111/cgf.15168","DOIUrl":"https://doi.org/10.1111/cgf.15168","url":null,"abstract":"<p>Animating gaze behavior is crucial for creating believable virtual characters, providing insights into their perception and interaction with the environment. In this paper, we present an efficient yet natural-looking gaze animation model applicable to real-time walking characters exploring natural environments. We address the challenge of dynamic gaze adaptation by combining findings from neuroscience with a data-driven saliency model. Specifically, our model determines gaze focus by considering the character's locomotion, environment stimuli, and terrain conditions. Our model is compatible with both automatic navigation through pre-defined character trajectories and user-guided interactive locomotion, and can be configured according to the desired degree of visual exploration of the environment. Our perceptual evaluation shows that our solution significantly improves the state-of-the-art saliency-based gaze animation with respect to the character's apparent awareness of the environment, the naturalness of the motion, and the elements to which it pays attention.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
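The model combines data-driven saliency with locomotion and terrain cues to decide where the character looks. A minimal numpy sketch of that kind of weighted scoring might look like the following; the linear form, the weights, and the choice of cues are assumptions for illustration only, not the paper's model.

```python
import numpy as np

def pick_gaze_target(candidates, saliency, heading, terrain_roughness,
                     w_sal=1.0, w_head=0.6, w_terr=0.8):
    """Score candidate gaze points and return the best one.

    candidates        : (N, 3) points of interest, relative to the eye (y-up)
    saliency          : (N,) data-driven saliency per candidate
    heading           : (3,) unit walking direction of the character
    terrain_roughness : scalar in [0, 1]; rough ground pulls gaze downward
    """
    to_cand = candidates / (np.linalg.norm(candidates, axis=1, keepdims=True) + 1e-9)
    alignment = to_cand @ heading          # prefer targets near the path ahead
    downwardness = -to_cand[:, 1]          # prefer lower targets on rough terrain
    score = (w_sal * saliency + w_head * alignment
             + w_terr * terrain_roughness * downwardness)
    return candidates[np.argmax(score)]

heading = np.array([0.0, 0.0, 1.0])
cands = np.array([[0.0, 0.1, 3.0],     # a person ahead, near eye height
                  [0.5, -1.4, 1.0],    # a rock on the path
                  [-2.0, 0.5, 0.5]])   # a bird off to the side
sal = np.array([0.9, 0.3, 0.6])
print(pick_gaze_target(cands, sal, heading, terrain_roughness=0.7))
```

A real-time system would additionally smooth the selected target over time and blend eye, head, and torso rotations, which this scoring-only sketch omits.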
VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15175
Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, Moritz Bächer
{"title":"VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters","authors":"Agon Serifi,&nbsp;Ruben Grandia,&nbsp;Espen Knoop,&nbsp;Markus Gross,&nbsp;Moritz Bächer","doi":"10.1111/cgf.15175","DOIUrl":"https://doi.org/10.1111/cgf.15175","url":null,"abstract":"<p>Recent progress in physics-based character control has made it possible to learn policies from unstructured motion data. However, it remains challenging to train a single control policy that works with diverse and unseen motions, and can be deployed to real-world physical robots. In this paper, we propose a two-stage technique that enables the control of a character with a full-body kinematic motion reference, with a focus on imitation accuracy. In a first stage, we extract a latent space encoding by training a variational autoencoder, taking short windows of motion from unstructured data as input. We then use the embedding from the time-varying latent code to train a conditional policy in a second stage, providing a mapping from kinematic input to dynamics-aware output. By keeping the two stages separate, we benefit from self-supervised methods to get better latent codes and explicit imitation rewards to avoid mode collapse. We demonstrate the efficiency and robustness of our method in simulation, with unseen user-specified motions, and on a bipedal robot, where we bring dynamic motions to the real world.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
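Stage one of a two-stage pipeline like this — a variational autoencoder over short motion windows — can be sketched in a few lines of PyTorch. The dimensions, the flattened-window input, and the β weight below are illustrative assumptions; the paper's actual architecture differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionWindowVAE(nn.Module):
    """Minimal VAE over flattened motion windows (8 frames x 69 pose features,
    32-D latent; all sizes illustrative)."""
    def __init__(self, window=8, pose_dim=69, latent=32):
        super().__init__()
        d = window * pose_dim
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.ELU(), nn.Linear(256, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ELU(), nn.Linear(256, d))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar, beta=1e-3):
    rec = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

# One training step on random stand-in "motion windows"; a second stage would
# then train a control policy conditioned on the time-varying latent code z.
vae = MotionWindowVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-4)
x = torch.randn(64, 8 * 69)
recon, mu, logvar = vae(x)
loss = vae_loss(x, recon, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Keeping the VAE separate from the policy is what lets the latent space be trained self-supervised while the policy is trained with explicit imitation rewards, as the abstract notes.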
Garment Animation NeRF with Color Editing
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15178
Renke Wang, Meng Zhang, Jun Li, Jian Yang
{"title":"Garment Animation NeRF with Color Editing","authors":"Renke Wang,&nbsp;Meng Zhang,&nbsp;Jun Li,&nbsp;Jian Yang","doi":"10.1111/cgf.15178","DOIUrl":"https://doi.org/10.1111/cgf.15178","url":null,"abstract":"<p>Generating high-fidelity garment animations through traditional workflows, from modeling to rendering, is both tedious and expensive. These workflows often require repetitive steps in response to updates in character motion, rendering viewpoint changes, or appearance edits. Although recent neural rendering offers an efficient solution for computationally intensive processes, it struggles with rendering complex garment animations containing fine wrinkle details and realistic garment-and-body occlusions, while maintaining structural consistency across frames and dense view rendering. In this paper, we propose a novel approach to directly synthesize garment animations from body motion sequences without the need for an explicit garment proxy. Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure. Simultaneously, we capture detailed features from synthesized reference images of the garment's front and back, generated by a pre-trained image model. These features are then used to construct a neural radiance field that renders the garment animation video. Additionally, our technique enables garment recoloring by decomposing its visual elements. We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency. Furthermore, we showcase its applicability to color editing on both real and synthetic garment data. Compared to existing neural rendering techniques, our method exhibits qualitative and quantitative improvements in garment dynamics and wrinkle detail modeling. Code is available at https://github.com/wrk226/GarmentAnimationNeRF.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
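The rendering backbone is a radiance field conditioned on motion-derived garment features. The toy PyTorch module below shows what such a conditional NeRF head can look like — a positionally encoded 3D point plus a motion feature vector in, color and density out. The sizes and the simple concatenation-based conditioning are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Standard NeRF-style sin/cos encoding of 3D points."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi
    enc = [x]
    for f in freqs:
        enc += [torch.sin(f * x), torch.cos(f * x)]
    return torch.cat(enc, dim=-1)

class ConditionalGarmentNeRF(nn.Module):
    """Toy radiance field conditioned on a garment dynamic feature vector
    (e.g. inferred from body motion)."""
    def __init__(self, motion_dim=64, num_freqs=6):
        super().__init__()
        in_dim = 3 + 3 * 2 * num_freqs + motion_dim
        self.mlp = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 4))   # (r, g, b, sigma)

    def forward(self, points, motion_feat):
        # points: (N, 3); motion_feat: (motion_dim,) shared across the frame
        cond = motion_feat.unsqueeze(0).expand(points.shape[0], -1)
        h = torch.cat([positional_encoding(points), cond], dim=-1)
        rgb_sigma = self.mlp(h)
        rgb = torch.sigmoid(rgb_sigma[:, :3])
        sigma = torch.relu(rgb_sigma[:, 3:])
        return rgb, sigma

field = ConditionalGarmentNeRF()
rgb, sigma = field(torch.rand(1024, 3), torch.randn(64))
print(rgb.shape, sigma.shape)
```

Volume rendering along camera rays, the image-derived detail features, and the recoloring decomposition described in the abstract are all omitted from this sketch.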
Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15170
Qingqing Zhao, Peizhuo Li, Wang Yifan, Olga Sorkine-Hornung, Gordon Wetzstein
{"title":"Pose-to-Motion: Cross-Domain Motion Retargeting with Pose Prior","authors":"Qingqing Zhao,&nbsp;Peizhuo Li,&nbsp;Wang Yifan,&nbsp;Sorkine-Hornung Olga,&nbsp;Gordon Wetzstein","doi":"10.1111/cgf.15170","DOIUrl":"https://doi.org/10.1111/cgf.15170","url":null,"abstract":"<div>\u0000 <p>Creating plausible motions for a diverse range of characters is a long-standing goal in computer graphics. Current learning-based motion synthesis methods rely on large-scale motion datasets, which are often difficult if not impossible to acquire. On the other hand, pose data is more accessible, since static posed characters are easier to create and can even be extracted from images using recent advancements in computer vision. In this paper, we tap into this alternative data source and introduce a neural motion synthesis approach through retargeting, which generates plausible motion of various characters that only have pose data by transferring motion from one single existing motion capture dataset of another drastically different characters. Our experiments show that our method effectively combines the motion features of the source character with the pose features of the target character, and performs robustly with small or noisy pose data sets, ranging from a few artist-created poses to noisy poses estimated directly from images. Additionally, a conducted user study indicated that a majority of participants found our retargeted motion to be more enjoyable to watch, more lifelike in appearance, and exhibiting fewer artifacts. Our code and dataset can be accessed here.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15170","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
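One way to picture "motion features of the source combined with pose features of the target" is an encoder-decoder with a pose-prior conditioning branch. The PyTorch sketch below is only a structural illustration under that assumption; it is not the paper's architecture, and all dimensions and module choices are hypothetical.

```python
import torch
import torch.nn as nn

class PoseConditionedRetargeter(nn.Module):
    """Toy cross-domain retargeting model: encode a source motion clip into a
    latent trajectory, then decode target-character poses conditioned on an
    embedding of that character's pose prior."""
    def __init__(self, src_dim=60, tgt_dim=45, pose_prior_dim=45, hidden=128):
        super().__init__()
        self.motion_enc = nn.GRU(src_dim, hidden, batch_first=True)
        self.pose_enc = nn.Sequential(nn.Linear(pose_prior_dim, hidden), nn.ReLU())
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_dim)

    def forward(self, src_motion, tgt_pose):
        # src_motion: (B, T, src_dim); tgt_pose: (B, pose_prior_dim)
        latent, _ = self.motion_enc(src_motion)          # per-frame motion code
        cond = self.pose_enc(tgt_pose).unsqueeze(1)      # (B, 1, hidden)
        dec, _ = self.decoder(latent + cond)             # inject target pose prior
        return self.out(dec)                             # (B, T, tgt_dim)

model = PoseConditionedRetargeter()
out = model(torch.randn(2, 120, 60), torch.randn(2, 45))
print(out.shape)   # torch.Size([2, 120, 45])
```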
Long-term Motion In-betweening via Keyframe Prediction
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15171
Seokhyeon Hong, Haemin Kim, Kyungmin Cho, Junyong Noh
{"title":"Long-term Motion In-betweening via Keyframe Prediction","authors":"Seokhyeon Hong,&nbsp;Haemin Kim,&nbsp;Kyungmin Cho,&nbsp;Junyong Noh","doi":"10.1111/cgf.15171","DOIUrl":"https://doi.org/10.1111/cgf.15171","url":null,"abstract":"<p>Motion in-betweening has emerged as a promising approach to enhance the efficiency of motion creation due to its flexibility and time performance. However, previous in-betweening methods are limited to generating short transitions due to growing pose ambiguity when the number of missing frames increases. This length-related constraint makes the optimization hard and it further causes another constraint on the target pose, limiting the degrees of freedom for artists to use. In this paper, we introduce a keyframe-driven approach that effectively solves the pose ambiguity problem, allowing robust in-betweening performance on various lengths of missing frames. To incorporate keyframe-driven motion synthesis, we introduce a keyframe score that measures the likelihood of a frame being used as a keyframe as well as an adaptive keyframe selection method that maintains appropriate temporal distances between resulting keyframes. Additionally, we employ phase manifolds to further resolve the pose ambiguity and incorporate trajectory conditions to guide the approximate movement of the character. Comprehensive evaluations, encompassing both quantitative and qualitative analyses, were conducted to compare our method with state-of-the-art in-betweening approaches across various transition lengths. The code for the paper is available at https://github.com/seokhyeonhong/long-mib</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
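The combination of a per-frame keyframe score and spacing-aware selection can be illustrated with a simple greedy rule: walk through the predicted scores and, within a bounded window after the previous keyframe, pick the highest-scoring frame. The numpy sketch below uses assumed gap bounds and a greedy rule that is deliberately simpler than the paper's adaptive method.

```python
import numpy as np

def select_keyframes(scores, min_gap=15, max_gap=45):
    """Greedy keyframe selection from per-frame keyframe scores.

    Inside each window of at most `max_gap` frames after the previous keyframe,
    pick the frame with the highest score, never closer than `min_gap` frames."""
    n = len(scores)
    keys = [0]                                   # the first frame is always given
    while keys[-1] < n - 1:
        lo = min(keys[-1] + min_gap, n - 1)
        hi = min(keys[-1] + max_gap, n - 1)
        window = scores[lo:hi + 1]
        keys.append(lo + int(np.argmax(window)))
    if keys[-1] != n - 1:
        keys.append(n - 1)                       # the last frame is always given
    return keys

rng = np.random.default_rng(0)
scores = rng.random(300)                         # stand-in for predicted keyframe likelihoods
print(select_keyframes(scores))
```

The selected keyframes would then be synthesized first, and the remaining frames filled in between them, which is what keeps pose ambiguity bounded for long transitions.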
PartwiseMPC: Interactive Control of Contact-Guided Motions
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15174
N. Khoshsiyar, R. Gou, T. Zhou, S. Andrews, M. van de Panne
{"title":"PartwiseMPC: Interactive Control of Contact-Guided Motions","authors":"N. Khoshsiyar,&nbsp;R. Gou,&nbsp;T. Zhou,&nbsp;S. Andrews,&nbsp;M. van de Panne","doi":"10.1111/cgf.15174","DOIUrl":"https://doi.org/10.1111/cgf.15174","url":null,"abstract":"<p>Physics-based character motions remain difficult to create and control. We make two contributions towards simpler specification and faster generation of physics-based control. First, we introduce a novel partwise model predictive control (MPC) method that exploits independent planning for body parts when this proves beneficial, while defaulting to whole-body motion planning when that proves to be more effective. Second, we introduce a new approach to motion specification, based on specifying an ordered set of contact keyframes. These each specify a small number of pairwise contacts between the body and the environment, and serve as loose specifications of motion strategies. Unlike regular keyframes or traditional trajectory optimization constraints, they are heavily under-constrained and have flexible timing. We demonstrate a range of challenging contact-rich motions that can be generated online at interactive rates using this framework. We further show the generalization capabilities of the method.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
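A contact keyframe is essentially an ordered list of (body part, target) pairs with a tolerance and no fixed timing. The sketch below shows one plausible data structure and cost term for such keyframes; the field names, tolerance, and ledge-climbing example are hypothetical, and the paper's actual MPC formulation is considerably richer.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class ContactKeyframe:
    """A loose motion specification: which body parts should touch which
    world-space targets, with no fixed timing."""
    contacts: List[Tuple[str, np.ndarray]]      # e.g. ("left_hand", target_point)
    tolerance: float = 0.05                     # metres

def keyframe_cost(body_positions, kf: ContactKeyframe):
    """Sum of squared distances of the requested parts to their targets;
    zero once every contact is within tolerance (the keyframe is achieved)."""
    total = 0.0
    for part, target in kf.contacts:
        d = np.linalg.norm(body_positions[part] - target)
        total += max(0.0, d - kf.tolerance) ** 2
    return total

# An ordered strategy for climbing onto a ledge, expressed as three keyframes.
ledge = np.array([1.0, 1.2, 0.0])
plan = [
    ContactKeyframe([("left_hand", ledge + np.array([0.0, 0.0, 0.1]))]),
    ContactKeyframe([("left_hand", ledge),
                     ("right_hand", ledge + np.array([0.0, 0.0, 0.3]))]),
    ContactKeyframe([("left_foot", ledge + np.array([0.0, 0.05, 0.2]))]),
]
pose = {"left_hand": np.array([0.8, 1.0, 0.1]),
        "right_hand": np.array([0.7, 1.0, 0.4]),
        "left_foot": np.array([0.9, 0.1, 0.2])}
print([round(keyframe_cost(pose, kf), 4) for kf in plan])
```

An MPC planner would treat this cost as part of its objective and advance to the next keyframe once the current one is achieved, which is what gives the keyframes their flexible timing.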
Reconstruction of implicit surfaces from fluid particles using convolutional neural networks
IF 2.7, Q4, Computer Science
Computer Graphics Forum, Pub Date: 2024-10-09, DOI: 10.1111/cgf.15181
C. Zhao, T. Shinar, C. Schroeder
{"title":"Reconstruction of implicit surfaces from fluid particles using convolutional neural networks","authors":"C. Zhao,&nbsp;T. Shinar,&nbsp;C. Schroeder","doi":"10.1111/cgf.15181","DOIUrl":"https://doi.org/10.1111/cgf.15181","url":null,"abstract":"<div>\u0000 <p>In this paper, we present a novel network-based approach for reconstructing signed distance functions from fluid particles. The method uses a weighting kernel to transfer particles to a regular grid, which forms the input to a convolutional neural network. We propose a regression-based regularization to reduce surface noise without penalizing high-curvature features. The reconstruction exhibits improved spatial surface smoothness and temporal coherence compared with existing state of the art surface reconstruction methods. The method is insensitive to particle sampling density and robustly handles thin features, isolated particles, and sharp edges.</p>\u0000 </div>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15181","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142707475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
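The input stage — splatting particles onto a regular grid with a weighting kernel — is straightforward to sketch. The numpy code below uses a poly6-style kernel and a small grid purely as stand-ins (the kernel choice, grid size, and smoothing radius are assumptions, not the paper's); the resulting volume is the kind of input a 3D CNN would consume.

```python
import numpy as np

def poly6_kernel(r, h):
    """Simple smooth compact kernel (poly6, common in SPH)."""
    w = np.zeros_like(r)
    m = r < h
    w[m] = (315.0 / (64.0 * np.pi * h**9)) * (h**2 - r[m]**2) ** 3
    return w

def splat_particles_to_grid(particles, grid_res=32, domain=1.0, h=0.1):
    """Accumulate kernel-weighted particle contributions on a regular grid."""
    grid = np.zeros((grid_res,) * 3, dtype=np.float32)
    cell = domain / grid_res
    reach = int(np.ceil(h / cell))
    centers = (np.arange(grid_res) + 0.5) * cell
    for p in particles:
        idx = np.clip((p / cell).astype(int), 0, grid_res - 1)
        lo = np.maximum(idx - reach, 0)
        hi = np.minimum(idx + reach + 1, grid_res)
        xs, ys, zs = (centers[lo[d]:hi[d]] for d in range(3))
        gx, gy, gz = np.meshgrid(xs, ys, zs, indexing="ij")
        r = np.sqrt((gx - p[0])**2 + (gy - p[1])**2 + (gz - p[2])**2)
        grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] += poly6_kernel(r, h)
    return grid

rng = np.random.default_rng(1)
particles = rng.random((500, 3)) * 0.5 + 0.25     # a blob of stand-in fluid particles
vol = splat_particles_to_grid(particles)
print(vol.shape, float(vol.max()))
```

In the paper's pipeline a convolutional network then regresses the signed distance field from such a volume; this sketch covers only the particle-to-grid transfer.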