Analyzing Input and Output Representations for Speech-Driven Gesture Generation

Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, Hedvig Kjellström
{"title":"Analyzing Input and Output Representations for Speech-Driven Gesture Generation","authors":"Taras Kucherenko, Dai Hasegawa, G. Henter, Naoshi Kaneko, H. Kjellström","doi":"10.1145/3308532.3329472","DOIUrl":null,"url":null,"abstract":"This paper presents a novel framework for automatic speech-driven gesture generation, applicable to human-agent interaction including both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates. Our approach consists of two steps. First, we learn a lower-dimensional representation of human motion using a denoising autoencoder neural network, consisting of a motion encoder MotionE and a motion decoder MotionD. The learned representation preserves the most important aspects of the human pose variation while removing less relevant variation. Second, we train a novel encoder network SpeechE to map from speech to a corresponding motion representation with reduced dimensionality. At test time, the speech encoder and the motion decoder networks are combined: SpeechE predicts motion representations based on a given speech signal and MotionD then decodes these representations to produce motion sequences. We evaluate different representation sizes in order to find the most effective dimensionality for the representation. We also evaluate the effects of using different speech features as input to the model. We find that mel-frequency cepstral coefficients (MFCCs), alone or combined with prosodic features, perform the best. The results of a subsequent user study confirm the benefits of the representation learning.","PeriodicalId":112642,"journal":{"name":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"112","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3308532.3329472","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 112

Abstract

This paper presents a novel framework for automatic speech-driven gesture generation, applicable to human-agent interaction including both virtual agents and robots. Specifically, we extend recent deep-learning-based, data-driven methods for speech-driven gesture generation by incorporating representation learning. Our model takes speech as input and produces gestures as output, in the form of a sequence of 3D coordinates. Our approach consists of two steps. First, we learn a lower-dimensional representation of human motion using a denoising autoencoder neural network, consisting of a motion encoder MotionE and a motion decoder MotionD. The learned representation preserves the most important aspects of the human pose variation while removing less relevant variation. Second, we train a novel encoder network SpeechE to map from speech to a corresponding motion representation with reduced dimensionality. At test time, the speech encoder and the motion decoder networks are combined: SpeechE predicts motion representations based on a given speech signal and MotionD then decodes these representations to produce motion sequences. We evaluate different representation sizes in order to find the most effective dimensionality for the representation. We also evaluate the effects of using different speech features as input to the model. We find that mel-frequency cepstral coefficients (MFCCs), alone or combined with prosodic features, perform the best. The results of a subsequent user study confirm the benefits of the representation learning.
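As a concrete illustration of the two-step pipeline described in the abstract, the sketch below wires a denoising motion autoencoder (MotionE/MotionD) to a speech encoder (SpeechE) and combines SpeechE with MotionD at generation time. This is a minimal sketch, not the authors' implementation: the layer sizes, feature dimensionalities (POSE_DIM, SPEECH_DIM, LATENT_DIM), noise level, and the frame-wise training steps are all illustrative assumptions.

```python
# Minimal PyTorch sketch of the two-step approach. All dimensions and
# architectures are assumptions for illustration, not the paper's exact setup.
import torch
import torch.nn as nn

POSE_DIM = 45     # assumed: e.g. 15 joints x 3 coordinates per frame
SPEECH_DIM = 26   # assumed: MFCC (+ prosodic) features per frame
LATENT_DIM = 32   # assumed: size of the learned motion representation


class MotionE(nn.Module):
    """Motion encoder: pose frame -> low-dimensional representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT_DIM))

    def forward(self, pose):
        return self.net(pose)


class MotionD(nn.Module):
    """Motion decoder: representation -> reconstructed pose frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, POSE_DIM))

    def forward(self, z):
        return self.net(z)


class SpeechE(nn.Module):
    """Speech encoder: speech features -> predicted motion representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SPEECH_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT_DIM))

    def forward(self, speech):
        return self.net(speech)


def autoencoder_step(motion_e, motion_d, poses, optimizer, noise_std=0.05):
    """Step 1: train a denoising autoencoder to reconstruct clean poses from noisy input."""
    noisy = poses + noise_std * torch.randn_like(poses)
    loss = nn.functional.mse_loss(motion_d(motion_e(noisy)), poses)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def speech_encoder_step(speech_e, motion_e, speech, poses, optimizer):
    """Step 2: train SpeechE to match the frozen MotionE representation of the aligned poses."""
    with torch.no_grad():
        target_z = motion_e(poses)
    loss = nn.functional.mse_loss(speech_e(speech), target_z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def generate_gestures(speech_e, motion_d, speech):
    """Test time: SpeechE predicts representations; MotionD decodes them into poses."""
    return motion_d(speech_e(speech))


if __name__ == "__main__":
    motion_e, motion_d, speech_e = MotionE(), MotionD(), SpeechE()
    opt_ae = torch.optim.Adam(list(motion_e.parameters()) + list(motion_d.parameters()), lr=1e-3)
    opt_sp = torch.optim.Adam(speech_e.parameters(), lr=1e-3)

    poses = torch.randn(64, POSE_DIM)     # dummy batch of pose frames
    speech = torch.randn(64, SPEECH_DIM)  # dummy batch of aligned speech features

    print("autoencoder loss:", autoencoder_step(motion_e, motion_d, poses, opt_ae))
    print("speech encoder loss:", speech_encoder_step(speech_e, motion_e, speech, poses, opt_sp))
    print("generated pose batch shape:", generate_gestures(speech_e, motion_d, speech).shape)
```

In this sketch the two networks are trained separately and only joined at generation time, mirroring the test-time combination of SpeechE and MotionD described in the abstract; sequence modelling, data loading, and evaluation are omitted.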