Automatic facial animation generation system of dancing characters considering emotion in dance and music

Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima
{"title":"Automatic facial animation generation system of dancing characters considering emotion in dance and music","authors":"Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima","doi":"10.1145/2820926.2820935","DOIUrl":null,"url":null,"abstract":"In recent years, a lot of 3D character dance animation movies are created by amateur users using 3DCG animation editing tools (e.g. MikuMikuDance). Whereas, most of them are created manually. Then automatic facial animation system for dancing character will be useful to create dance movies and visualize impressions effectively. Therefore, we address the challenging theme to estimate dancing character's emotions (we call \"dance emotion\"). In previous work considering music features, DiPaola et al. [2006] proposed music-driven emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (Thayer model), and achieved to generate facial animation that matches music emotion. However, their model can't express subtleties of emotion between two emotions because input music divided into few moods sharply using Gaussian mixture model. In addition, they decide more detailed moods based on the psychological rules that uses score information, so they requires MIDI data. In this paper, we propose \"dance emotion model\" to visualize dancing character's emotion as facial expression. Our model is built by the coordinate information frame by frame on the emotional space through perceptional experiment using music and dance motion database without MIDI data. Moreover, by considering the displacement on the emotional space, we can express not only a certain emotion but also subtleties of emotions. As the result, our system got a higher accuracy comparing with the previous work. We can create the facial expression result soon by inputting audio data and synchronized motion. It is shown the utility through the comparison with previous work in Figure 1.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2015 Posters","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2820926.2820935","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

In recent years, many 3D character dance animations have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance), but most of them are produced manually. An automatic facial animation system for dancing characters would therefore be useful for creating dance movies and visualizing their impressions effectively. We address the challenging problem of estimating a dancing character's emotions, which we call "dance emotion." Among previous work that considers music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation matching the music's emotion. However, their model cannot express subtle emotions lying between two moods, because the input music is sharply divided into a few mood classes by a Gaussian mixture model. In addition, they determine more detailed moods with psychological rules that use score information, so their method requires MIDI data. In this paper, we propose a "dance emotion model" that visualizes a dancing character's emotion as facial expression. The model is built from frame-by-frame coordinates in an emotional space, obtained through a perceptual experiment on a music and dance-motion database, without MIDI data. Moreover, by considering displacement in the emotional space, we can express not only a single emotion but also subtle blends of emotions. As a result, our system achieves higher accuracy than the previous work. A facial expression result can be created quickly by inputting audio data and synchronized motion; Figure 1 demonstrates this utility through a comparison with the previous work.
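The abstract sketches the core mechanism: each frame of the synchronized music and motion is mapped to a coordinate in a 2D emotional space (a valence-arousal plane in the spirit of the Thayer model), and the facial expression is derived from both the current position and its displacement over time. The Python sketch below is a minimal illustration of that idea; the quadrant placement of basis expressions, the inverse-distance blending, and the exponential smoothing are all hypothetical assumptions for illustration, not the paper's actual mapping, which is derived from a perceptual experiment.

```python
import numpy as np

# Hypothetical basis expressions placed at the four quadrants of a
# Thayer-style valence-arousal plane (anchor positions are illustrative).
BASIS_EXPRESSIONS = {
    "joy":     np.array([ 0.8,  0.6]),   # positive valence, high arousal
    "anger":   np.array([-0.8,  0.6]),   # negative valence, high arousal
    "sadness": np.array([-0.8, -0.6]),   # negative valence, low arousal
    "calm":    np.array([ 0.8, -0.6]),   # positive valence, low arousal
}

def expression_weights(point, sharpness=2.0):
    """Blend basis expressions by inverse distance on the emotional plane,
    so a point between quadrants yields a mixture, not a hard mood label."""
    raw = {}
    for name, anchor in BASIS_EXPRESSIONS.items():
        d = np.linalg.norm(point - anchor)
        raw[name] = 1.0 / (d ** sharpness + 1e-6)
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

def animate(trajectory, smoothing=0.8):
    """trajectory: per-frame (valence, arousal) points estimated from the
    synchronized music and dance-motion features. Smoothing the displacement
    between frames makes the face pass through subtle in-between blends."""
    prev = None
    frames = []
    for point in trajectory:
        point = np.asarray(point, dtype=float)
        if prev is not None:
            # Exponential smoothing: small displacements move the expression
            # gradually instead of snapping between discrete moods.
            point = smoothing * prev + (1.0 - smoothing) * point
        frames.append(expression_weights(point))
        prev = point
    return frames

# Example: a dance drifting from calm toward joy yields a gradual blend.
demo = animate([(0.6, -0.5), (0.7, -0.2), (0.8, 0.3), (0.8, 0.6)])
print(demo[0])
print(demo[-1])
```

This continuous blending is what distinguishes the approach from a GMM-style hard classification into a few moods: intermediate positions and their trajectories on the plane produce intermediate expressions.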