Latest Literature in Computer Science

Appearance-Preserved Portrait-to-Anime Translation via Proxy-Guided Domain Adaptation.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2022.3228707
Wenpeng Xiao, Cheng Xu, Jiajie Mai, Xuemiao Xu, Yue Li, Chengze Li, Xueting Liu, Shengfeng He
{"title":"Appearance-Preserved Portrait-to-Anime Translation via Proxy-Guided Domain Adaptation.","authors":"Wenpeng Xiao, Cheng Xu, Jiajie Mai, Xuemiao Xu, Yue Li, Chengze Li, Xueting Liu, Shengfeng He","doi":"10.1109/TVCG.2022.3228707","DOIUrl":"10.1109/TVCG.2022.3228707","url":null,"abstract":"<p><p>Converting a human portrait to anime style is a desirable but challenging problem. Existing methods fail to resolve this problem due to the large inherent gap between two domains that cannot be overcome by a simple direct mapping. For this reason, these methods struggle to preserve the appearance features in the original photo. In this article, we discover an intermediate domain, the coser portrait (portraits of humans costuming as anime characters), that helps bridge this gap. It alleviates the learning ambiguity and loosens the mapping difficulty in a progressive manner. Specifically, we start from learning the mapping between coser and anime portraits, and present a proxy-guided domain adaptation learning scheme with three progressive adaptation stages to shift the initial model to the human portrait domain. In this way, our model can generate visually pleasant anime portraits with well-preserved appearances given the human portrait. Our model adopts a disentangled design by breaking down the translation problem into two specific subtasks of face deformation and portrait stylization. This further elevates the generation quality. Extensive experimental results show that our model can achieve visually compelling translation with better appearance preservation and perform favorably against the existing methods both qualitatively and quantitatively. Our code and datasets are available at https://github.com/NeverGiveU/PDA-Translation.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9247236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
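To make the three-stage adaptation scheme above concrete, here is a minimal sketch of progressive fine-tuning in PyTorch. The `TinyGenerator`, the L1-only objective, the dummy data, and the stage schedule are illustrative assumptions, not the authors' released code (see their GitHub link above for the real system).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy image-to-image generator standing in for the translation model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.tanh(self.net(x))

def adaptation_stage(gen, loader, steps, lr):
    """One progressive stage: fit gen on (source, target) image pairs."""
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _, (src, tgt) in zip(range(steps), loader):
        loss = F.l1_loss(gen(src), tgt)  # appearance-preservation term only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen

def dummy_pairs(batch):
    while True:  # stand-in for a real (coser/anime or human/anime) dataset
        yield torch.randn(batch, 3, 64, 64), torch.randn(batch, 3, 64, 64)

gen = TinyGenerator()
# Stage 1: learn the easier proxy mapping, coser portrait -> anime.
gen = adaptation_stage(gen, dummy_pairs(4), steps=50, lr=1e-3)
# Stages 2-3: progressively shift the model toward the human-portrait
# domain, e.g. with human-portrait inputs and a decaying learning rate.
gen = adaptation_stage(gen, dummy_pairs(4), steps=50, lr=3e-4)
gen = adaptation_stage(gen, dummy_pairs(4), steps=50, lr=1e-4)
```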
Spatio-Temporal Visual Analysis of Turbulent Superstructures in Unsteady Flow.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2022.3232367
Behdad Ghaffari, Davide Gatti, Rüdiger Westermann
{"title":"Spatio-Temporal Visual Analysis of Turbulent Superstructures in Unsteady Flow.","authors":"Behdad Ghaffari, Davide Gatti, Rudiger Westermann","doi":"10.1109/TVCG.2022.3232367","DOIUrl":"10.1109/TVCG.2022.3232367","url":null,"abstract":"<p><p>The large-scale motions in 3D turbulent channel flows, known as Turbulent Superstructures (TSS), play an essential role in the dynamics of small-scale structures within the turbulent boundary layer. However, as of today, there is no common agreement on the spatial and temporal relationships between these multiscale structures. We propose a novel space-time visualization technique for analyzing the temporal evolution of these multiscale structures in their spatial context and, thus, to further shed light on the conceptually different explanations of their dynamics. Since the temporal dynamics of TSS are believed to influence the structures in the turbulent boundary layer, we propose a combination of a 2D space-time velocity plot with an orthogonal 2D plot of projected 3D flow structures, which can interactively span the time and the space axis. Besides flow structures indicating the fluid motion, we propose showing the variations in derived fields as an additional source of explanation. The relationships between the structures in different spatial and temporal scales can be more effectively resolved by using various filtering operations and image registration algorithms. To reduce the information loss due to the non-injective nature of projection, spatial information is encoded into transparency or color. Since the proposed visualization is heavily demanding computational resources and memory bandwidth to stream unsteady flow fields and instantly compute derived 3D flow structures, the implementation exploits data compression, parallel computation capabilities, and high memory bandwidth on recent GPUs via the CUDA compute library.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9247686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
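The central visual idiom here, a 2D space-time (x-t) velocity plot, is easy to reproduce on synthetic data. A minimal matplotlib sketch follows, with a made-up advecting velocity field standing in for real channel-flow data.

```python
import numpy as np
import matplotlib.pyplot as plt

nx, nt = 256, 200
x = np.linspace(0, 8 * np.pi, nx)   # streamwise coordinate
t = np.linspace(0, 20, nt)          # time
X, T = np.meshgrid(x, t)
# Synthetic streamwise velocity: a large-scale structure advecting in x,
# plus small-scale fluctuations (stand-ins for boundary-layer structures).
u = np.sin(X - 0.8 * T) + 0.3 * np.sin(7 * X - 5 * T)

plt.pcolormesh(x, t, u, shading="auto", cmap="RdBu_r")
plt.xlabel("streamwise position x")
plt.ylabel("time t")
plt.title("space-time plot of u(x, t)")
plt.colorbar(label="u")
plt.show()
```

In the paper's setting, persistent diagonal bands in such a plot reveal large-scale structures advecting over time, which is what motivates pairing it with an orthogonal plot of the projected 3D structures.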
A Parametric Design Method for Engraving Patterns on Thin Shells.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3240503
Jiangbei Hu, Shengfa Wang, Ying He, Zhongxuan Luo, Na Lei, Ligang Liu
{"title":"A Parametric Design Method for Engraving Patterns on Thin Shells.","authors":"Jiangbei Hu, Shengfa Wang, Ying He, Zhongxuan Luo, Na Lei, Ligang Liu","doi":"10.1109/TVCG.2023.3240503","DOIUrl":"10.1109/TVCG.2023.3240503","url":null,"abstract":"<p><p>Designing thin-shell structures that are diverse, lightweight, and physically viable is a challenging task for traditional heuristic methods. To address this challenge, we present a novel parametric design framework for engraving regular, irregular, and customized patterns on thin-shell structures. Our method optimizes pattern parameters such as size and orientation, to ensure structural stiffness while minimizing material consumption. Our method is unique in that it works directly with shapes and patterns represented by functions, and can engrave patterns through simple function operations. By eliminating the need for remeshing in traditional FEM methods, our method is more computationally efficient in optimizing mechanical properties and can significantly increase the diversity of shell structure design. Quantitative evaluation confirms the convergence of the proposed method. We conduct experiments on regular, irregular, and customized patterns and present 3D printed results to demonstrate the effectiveness of our approach.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9252928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
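The phrase "engrave patterns through simple function operations" can be illustrated with signed distance functions: a groove pattern is carved out of a shell by the CSG difference max(shell, -pattern). The NumPy sketch below uses made-up shapes and parameters, not the paper's representation.

```python
import numpy as np

def sphere_sdf(p, r=1.0):
    """Signed distance to a sphere of radius r centered at the origin."""
    return np.linalg.norm(p, axis=-1) - r

def shell_sdf(p, r=1.0, thickness=0.05):
    # Thin shell = |distance to the sphere surface| minus half thickness.
    return np.abs(sphere_sdf(p, r)) - thickness / 2

def pattern_sdf(p, period=0.4, width=0.05):
    # A regular grid of grooves, defined purely as a function of position.
    g = np.minimum(np.abs(np.mod(p[..., 0], period) - period / 2),
                   np.abs(np.mod(p[..., 1], period) - period / 2))
    return g - width

def engraved_sdf(p):
    # Boolean difference: shell minus pattern = max(shell, -pattern).
    return np.maximum(shell_sdf(p), -pattern_sdf(p))

pts = np.random.uniform(-1.2, 1.2, size=(5, 3))
print(engraved_sdf(pts))  # negative values lie inside the engraved shell
```

Because the result is still a plain function of position, changing `period` or `width` re-engraves the shell with no remeshing, which is the efficiency argument the abstract makes.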
Magic Furniture: Design Paradigm of Multi-Function Assembly.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3250488
Qiang Fu, Fan Zhang, Xueming Li, Hongbo Fu
{"title":"Magic Furniture: Design Paradigm of Multi-Function Assembly.","authors":"Qiang Fu, Fan Zhang, Xueming Li, Hongbo Fu","doi":"10.1109/TVCG.2023.3250488","DOIUrl":"10.1109/TVCG.2023.3250488","url":null,"abstract":"<p><p>Assembly-based furniture with movable parts enables shape and structure reconfiguration, thus supporting multiple functions. Although a few attempts have been made for facilitating the creation of multi-function objects, designing such a multi-function assembly with the existing solutions often requires high imagination of designers. We develop the Magic Furniture system for users to easily create such designs simply given multiple cross-category objects. Our system automatically leverages the given objects as references to generate a 3D model with movable boards driven by back-and-forth movement mechanisms. By controlling the states of these mechanisms, a designed multi-function furniture object can be reconfigured to approximate the shapes and functions of the given objects. To ensure the designed furniture easy to transform between different functions, we perform an optimization algorithm to choose a proper number of movable boards and determine their shapes and sizes, following a set of design guidelines. We demonstrate the effectiveness of our system through various multi-function furniture designed with different sets of reference inputs and various movement constraints. We also evaluate the design results through several experiments including comparative and user studies.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9260517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
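The board-count optimization can be caricatured as picking the smallest number of movable boards whose worst-case approximation error over all reference objects falls below a tolerance. The objective below is a pure placeholder to show the structure of such a search; it is not the paper's formulation.

```python
import itertools

def approximation_error(num_boards, reference):
    # Placeholder objective: in the real system this would measure how
    # well a num_boards configuration can approximate the reference shape.
    return max(0.0, reference["complexity"] - 0.5 * num_boards)

references = [{"name": "desk", "complexity": 2.0},
              {"name": "shelf", "complexity": 3.0}]

tolerance = 0.25
for n in itertools.count(1):  # fewer boards = simpler transformations
    worst = max(approximation_error(n, ref) for ref in references)
    if worst <= tolerance:
        print(f"choose {n} boards (worst error {worst:.2f})")
        break
```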
NeRC: Rendering Planar Caustics by Learning Implicit Neural Representations.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3259382
Jiaxiong Qiu, Ze-Xin Yin, Ming-Ming Cheng, Bo Ren
{"title":"NeRC: Rendering Planar Caustics by Learning Implicit Neural Representations.","authors":"Jiaxiong Qiu, Ze-Xin Yin, Ming-Ming Cheng, Bo Ren","doi":"10.1109/TVCG.2023.3259382","DOIUrl":"10.1109/TVCG.2023.3259382","url":null,"abstract":"<p><p>Caustics are challenging light transport effects for photo-realistic rendering. Photon mapping techniques play a fundamental role in rendering caustics. However, photon mapping methods render single caustics under the stationary light source in a fixed scene view. They require significant storage and computing resources to produce high-quality results. In this paper, we propose efficiently rendering more diverse caustics of a scene with the camera and the light source moving. We present a novel learning-based volume rendering approach with implicit representations for our proposed task. Considering the variety of materials and textures of planar caustic receivers, we decompose the output appearance into two components: the diffuse and specular parts with a probabilistic module. Unlike NeRF, we construct weights for rendering each component from the implicit signed distance function (SDF). Moreover, we introduce the centering calibration and the sine activation function to improve the performance of the color prediction network. Extensive experiments on the synthetic and real-world datasets illustrate that our method achieves much better performance than baselines in the quantitative and qualitative comparison, for rendering caustics in novel views with the dynamic light source. Especially, our method outperforms the baseline on the temporal consistency across frames.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9594790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
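The sine activation mentioned for the color prediction network is in the spirit of SIREN-style layers. A minimal sketch in PyTorch; the layer widths, the `omega` frequency, and the final sigmoid are assumptions, not the NeRC architecture.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a frequency-scaled sine activation."""
    def __init__(self, d_in, d_out, omega=30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

color_net = nn.Sequential(
    SineLayer(3, 64),   # input: 3D sample position
    SineLayer(64, 64),
    nn.Linear(64, 3),   # output: RGB
)
rgb = torch.sigmoid(color_net(torch.randn(1024, 3)))
print(rgb.shape)  # torch.Size([1024, 3])
```

Sine activations let a coordinate network fit high-frequency detail, which is plausibly why they help with the sharp intensity ridges of caustics.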
Softness Perception of Visual Objects Controlled by Touchless Inputs: The Role of Effective Distance of Hand Movements.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3254522
Takahiro Kawabe, Yusuke Ujitoko
{"title":"Softness Perception of Visual Objects Controlled by Touchless Inputs: The Role of Effective Distance of Hand Movements.","authors":"Takahiro Kawabe, Yusuke Ujitoko","doi":"10.1109/TVCG.2023.3254522","DOIUrl":"10.1109/TVCG.2023.3254522","url":null,"abstract":"<p><p>Feedback on the material properties of a visual object is essential in enhancing the users' perceptual experience of the object when users control the object with touchless inputs. Focusing on the softness perception of the object, we examined how the effective distance of hand movements influenced the degree of the object's softness perceived by users. In the experiments, participants moved their right hand in front of a camera which tracked their hand position. A textured 2D or 3D object on display deformed depending on the participant's hand position. In addition to establishing a ratio of deformation magnitude to the distance of hand movements, we altered the effective distance of hand movement, within which the hand movement could deform the object. Participants rated the strength of perceived softness (Experiments 1 and 2) and other perceptual impressions (Experiment 3). A longer effective distance produced a softer impression of the 2D and 3D objects. The saturation speed of object deformation due to the effective distance was not a critical determinant. The effective distance also modulated other perceptual impressions than softness. The role of the effective distance of hand movements on perceptual impressions of objects under touchless control is discussed.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9614749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
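The experimental manipulation, a fixed deformation-to-movement ratio plus a cap at the effective distance, reduces to a one-line mapping. A toy version with made-up units and gains:

```python
def deformation(hand_displacement_cm, gain=0.5, effective_distance_cm=20.0):
    """Deformation grows with hand movement up to the effective distance,
    beyond which further movement has no additional effect (saturation)."""
    clipped = min(abs(hand_displacement_cm), effective_distance_cm)
    return gain * clipped

for d in (5, 10, 20, 40):
    print(d, "->", deformation(d))
# A longer effective distance lets deformation keep growing with the hand,
# which the study links to a softer perceived impression of the object.
```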
MobileSky: Real-Time Sky Replacement for Mobile AR.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3257840
Xinjie Wang, Qingxuan Lv, Guo Chen, Jing Zhang, Zhiqiang Wei, Junyu Dong, Hongbo Fu, Zhipeng Zhu, Jingxin Liu, Xiaogang Jin
{"title":"MobileSky: Real-Time Sky Replacement for Mobile AR.","authors":"Xinjie Wang, Qingxuan Lv, Guo Chen, Jing Zhang, Zhiqiang Wei, Junyu Dong, Hongbo Fu, Zhipeng Zhu, Jingxin Liu, Xiaogang Jin","doi":"10.1109/TVCG.2023.3257840","DOIUrl":"10.1109/TVCG.2023.3257840","url":null,"abstract":"<p><p>We present MobileSky, the first automatic method for real-time high-quality sky replacement for mobile AR applications. The primary challenge of this task is how to extract sky regions in camera feed both quickly and accurately. While the problem of sky replacement is not new, previous methods mainly concern extraction quality rather than efficiency, limiting their application to our task. We aim to provide higher quality, both spatially and temporally consistent sky mask maps for all camera frames in real time. To this end, we develop a novel framework that combines a new deep semantic network called FSNet with novel post-processing refinement steps. By leveraging IMU data, we also propose new sky-aware constraints such as temporal consistency, position consistency, and color consistency to help refine the weakly classified part of the segmentation output. Experiments show that our method achieves an average of around 30 FPS on off-the-shelf smartphones and outperforms the state-of-the-art sky replacement methods in terms of execution speed and quality. In the meantime, our mask maps appear to be visually more stable across frames. Our fast sky replacement method enables several applications, such as AR advertising, art making, generating fantasy celestial objects, visually learning about weather phenomena, and advanced video-based visual effects. To facilitate future research, we also create a new video dataset containing annotated sky regions with IMU data.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
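To give a flavor of the temporal-consistency idea, here is a deliberately simplified sketch that exponentially smooths per-frame sky probabilities, trusting the current frame more when the IMU reports more camera motion. The actual constraints in MobileSky are more involved; this blend rule and its parameters are assumptions.

```python
import numpy as np

def smooth_mask(prev_prob, cur_prob, camera_motion, base_alpha=0.6):
    """Blend the current sky-probability map with the previous one.
    More camera motion -> rely more on the current frame's prediction."""
    alpha = np.clip(base_alpha + camera_motion, 0.0, 1.0)
    return alpha * cur_prob + (1 - alpha) * prev_prob

prev = np.random.rand(4, 4)  # previous frame's sky probabilities
cur = np.random.rand(4, 4)   # current network output
print(smooth_mask(prev, cur, camera_motion=0.1))
```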
Laplacian2Mesh: Laplacian-Based Mesh Understanding.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3259044
Qiujie Dong, Zixiong Wang, Manyi Li, Junjie Gao, Shuangmin Chen, Zhenyu Shu, Shiqing Xin, Changhe Tu, Wenping Wang
{"title":"Laplacian2Mesh: Laplacian-Based Mesh Understanding.","authors":"Qiujie Dong, Zixiong Wang, Manyi Li, Junjie Gao, Shuangmin Chen, Zhenyu Shu, Shiqing Xin, Changhe Tu, Wenping Wang","doi":"10.1109/TVCG.2023.3259044","DOIUrl":"10.1109/TVCG.2023.3259044","url":null,"abstract":"<p><p>Geometric deep learning has sparked a rising interest in computer graphics to perform shape understanding tasks, such as shape classification and semantic segmentation. When the input is a polygonal surface, one has to suffer from the irregular mesh structure. Motivated by the geometric spectral theory, we introduce Laplacian2Mesh, a novel and flexible convolutional neural network (CNN) framework for coping with irregular triangle meshes (vertices may have any valence). By mapping the input mesh surface to the multi-dimensional Laplacian-Beltrami space, Laplacian2Mesh enables one to perform shape analysis tasks directly using the mature CNNs, without the need to deal with the irregular connectivity of the mesh structure. We further define a mesh pooling operation such that the receptive field of the network can be expanded while retaining the original vertex set as well as the connections between them. Besides, we introduce a channel-wise self-attention block to learn the individual importance of feature ingredients. Laplacian2Mesh not only decouples the geometry from the irregular connectivity of the mesh structure but also better captures the global features that are central to shape classification and segmentation. Extensive tests on various datasets demonstrate the effectiveness and efficiency of Laplacian2Mesh, particularly in terms of the capability of being vulnerable to noise to fulfill various learning tasks.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9615316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
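The key move, mapping per-vertex signals into a low-frequency Laplacian eigenbasis so that an ordinary CNN can consume them regardless of mesh connectivity, can be sketched with a combinatorial graph Laplacian on a toy mesh. The paper uses geometric (Laplace-Beltrami) operators on real meshes; the tetrahedron and feature sizes below are illustrative.

```python
import numpy as np

# Tiny "mesh" as a vertex graph: the edges of a tetrahedron.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A        # combinatorial graph Laplacian

vals, vecs = np.linalg.eigh(L)   # eigenpairs in ascending order
k = 3
basis = vecs[:, :k]              # low-frequency eigenbasis
features = np.random.rand(n, 6)  # per-vertex input features
spectral = basis.T @ features    # k x 6 spectral coefficients
print(spectral.shape)            # (3, 6), independent of vertex valence
```

Once features live in this fixed-size spectral representation, the irregular connectivity of the original mesh no longer constrains the network architecture.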
Graph Exploration With Embedding-Guided Layouts.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3238909
Leixian Shen, Zhiwei Tai, Enya Shen, Jianmin Wang
{"title":"Graph Exploration With Embedding-Guided Layouts.","authors":"Leixian Shen, Zhiwei Tai, Enya Shen, Jianmin Wang","doi":"10.1109/TVCG.2023.3238909","DOIUrl":"10.1109/TVCG.2023.3238909","url":null,"abstract":"<p><p>Node-link diagrams are widely used to visualize graphs. Most graph layout algorithms only use graph topology for aesthetic goals (e.g., minimize node occlusions and edge crossings) or use node attributes for exploration goals (e.g., preserve visible communities). Existing hybrid methods that bind the two perspectives still suffer from various generation restrictions (e.g., limited input types and required manual adjustments and prior knowledge of graphs) and the imbalance between aesthetic and exploration goals. In this article, we propose a flexible embedding-based graph exploration pipeline to enjoy the best of both graph topology and node attributes. First, we leverage embedding algorithms for attributed graphs to encode the two perspectives into latent space. Then, we present an embedding-driven graph layout algorithm, GEGraph, which can achieve aesthetic layouts with better community preservation to support an easy interpretation of the graph structure. Next, graph explorations are extended based on the generated graph layout and insights extracted from the embedding vectors. Illustrated with examples, we build a layout-preserving aggregation method with Focus+Context interaction and a related nodes searching approach with multiple proximity strategies. Finally, we conduct quantitative and qualitative evaluations, a user study, and two case studies to validate our approach.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9621993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
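An embedding-driven layout in miniature: project node embeddings to 2D and use that as the initialization of a force-directed layout, so communities found in embedding space survive in the drawing. The embedding below is a crude stand-in (powers of the adjacency matrix), not GEGraph's attributed-graph embedding.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)

# Stand-in node embedding: rows of low-order powers of the adjacency
# matrix; a real pipeline would use an attributed-graph embedding model.
emb = np.hstack([A, A @ A])
emb = emb - emb.mean(0)
_, _, vt = np.linalg.svd(emb, full_matrices=False)
init = {v: emb[v] @ vt[:2].T for v in G.nodes}  # 2D PCA projection

# Force-directed refinement seeded by the embedding-based positions.
pos = nx.spring_layout(G, pos=init, iterations=20, seed=0)
print(pos[0])
```

Seeding the aesthetic (force-directed) pass with embedding coordinates is one simple way to balance the aesthetic and exploration goals the abstract contrasts.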
Keyframe Control of Music-Driven 3D Dance Generation.
IF 4.7 · CAS Tier 1 · Computer Science
IEEE Transactions on Visualization and Computer Graphics Pub Date: 2024-07-01 Epub Date: 2024-06-27 DOI: 10.1109/TVCG.2023.3235538
Zhipeng Yang, Yu-Hui Wen, Shu-Yu Chen, Xiao Liu, Yuan Gao, Yong-Jin Liu, Lin Gao, Hongbo Fu
{"title":"Keyframe Control of Music-Driven 3D Dance Generation.","authors":"Zhipeng Yang, Yu-Hui Wen, Shu-Yu Chen, Xiao Liu, Yuan Gao, Yong-Jin Liu, Lin Gao, Hongbo Fu","doi":"10.1109/TVCG.2023.3235538","DOIUrl":"10.1109/TVCG.2023.3235538","url":null,"abstract":"<p><p>For 3D animators, choreography with artificial intelligence has attracted more attention recently. However, most existing deep learning methods mainly rely on music for dance generation and lack sufficient control over generated dance motions. To address this issue, we introduce the idea of keyframe interpolation for music-driven dance generation and present a novel transition generation technique for choreography. Specifically, this technique synthesizes visually diverse and plausible dance motions by using normalizing flows to learn the probability distribution of dance motions conditioned on a piece of music and a sparse set of key poses. Thus, the generated dance motions respect both the input musical beats and the key poses. To achieve a robust transition of varying lengths between the key poses, we introduce a time embedding at each timestep as an additional condition. Extensive experiments show that our model generates more realistic, diverse, and beat-matching dance motions than the compared state-of-the-art methods, both qualitatively and quantitatively. Our experimental results demonstrate the superiority of the keyframe-based control for improving the diversity of the generated dance motions.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9621985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
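The generative core, a normalizing flow conditioned on music features plus a per-timestep embedding, can be sketched as a single conditional affine coupling layer with a sinusoidal time embedding. All dimensions, the network shape, and the tanh-bounded scale are illustrative choices, not the paper's architecture.

```python
import math
import torch
import torch.nn as nn

def time_embedding(t, dim=16):
    """Sinusoidal embedding of a scalar timestep, used as an extra condition."""
    freqs = torch.exp(torch.arange(dim // 2) * -math.log(10000.0) / (dim // 2))
    ang = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)

class ConditionalCoupling(nn.Module):
    """Affine coupling layer: transform half the pose vector, conditioned
    on the other half plus (music, time) features; invertible by design."""
    def __init__(self, d_pose=32, d_cond=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_pose // 2 + d_cond, 128), nn.ReLU(),
            nn.Linear(128, d_pose),  # predicts scale and shift
        )

    def forward(self, x, cond):
        x1, x2 = x.chunk(2, dim=-1)
        s, b = self.net(torch.cat([x1, cond], -1)).chunk(2, dim=-1)
        y2 = x2 * torch.exp(torch.tanh(s)) + b  # invertible affine transform
        return torch.cat([x1, y2], -1)

music = torch.randn(8, 32)                 # per-frame music features
t_emb = time_embedding(torch.arange(8.0))  # time signal along the transition
cond = torch.cat([music, t_emb], -1)       # 32 + 16 = 48 condition dims
layer = ConditionalCoupling()
y = layer(torch.randn(8, 32), cond)
print(y.shape)  # torch.Size([8, 32])
```

Stacking such layers yields an exact-likelihood model that can sample diverse transitions between key poses while staying conditioned on the music, matching the control property the abstract emphasizes.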