A Semantic Talking Style Space for Speech-driven Facial Animation

Impact Factor: 6.5
Yujin Chai, Yanlin Weng, Tianjia Shao, Kun Zhou
IEEE Transactions on Visualization and Computer Graphics, published 2025-09-29.
DOI: 10.1109/TVCG.2025.3615390
Citations: 0

Abstract

We present a latent talking style space with semantic meanings for speech-driven 3D facial animation. The style space is learned from 3D speech facial animations via a self-supervision paradigm without any style labeling, leading to an automatic separation of high-level attributes, i.e., different channels of the latent style code possess different semantic meanings, such as a wide/slightly open mouth, a grinning/round mouth, and frowning/raising eyebrows. The style space enables intuitive and flexible control of talking styles in speech-driven facial animation through manipulating the channels of style code. To effectively learn such a style space, we propose a two-stage approach, involving two deep neural networks, to disentangle the person identity, speech content, and talking style contained in 3D speech facial animations. The training is performed on a novel dataset of 3D talking faces of various styles, constructed from over ten hours of videos of 200 subjects collected from the Internet.
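The abstract describes a style space in which individual channels of the latent style code carry semantic meaning, so talking style can be controlled by editing single channels of the code. A minimal sketch of that interaction pattern is shown below; it is not the paper's implementation, and the channel names, indices, and the `StyleCode` class are hypothetical illustrations of channel-wise editing and style interpolation.

```python
# Hedged sketch: channel-wise editing of a semantic latent style code.
# The attribute-to-channel mapping below is a hypothetical example, not
# the mapping learned by the paper's self-supervised training.
from dataclasses import dataclass
from typing import List

SEMANTIC_CHANNELS = {
    "mouth_openness": 0,  # wide vs. slightly open mouth
    "mouth_shape": 1,     # grinning vs. round mouth
    "brow_motion": 2,     # frowning vs. raising eyebrows
}

@dataclass
class StyleCode:
    values: List[float]  # latent style vector, one entry per channel

    def edited(self, attribute: str, value: float) -> "StyleCode":
        """Return a copy with one semantic channel set to a new value."""
        idx = SEMANTIC_CHANNELS[attribute]
        new_values = list(self.values)
        new_values[idx] = value
        return StyleCode(new_values)

    def blended(self, other: "StyleCode", t: float) -> "StyleCode":
        """Linearly interpolate two styles (t=0 -> self, t=1 -> other)."""
        return StyleCode([(1 - t) * a + t * b
                          for a, b in zip(self.values, other.values)])

# Example: start from a neutral style, then open the mouth wider and
# blend halfway back toward neutral.
neutral = StyleCode([0.0, 0.0, 0.0])
wide_mouth = neutral.edited("mouth_openness", 1.0)
halfway = neutral.blended(wide_mouth, 0.5)
print(wide_mouth.values)  # [1.0, 0.0, 0.0]
print(halfway.values)     # [0.5, 0.0, 0.0]
```

Because only one channel changes per edit, attributes can be adjusted independently, which is the intuitive control the abstract attributes to the disentangled style space.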
