Facial animation framework for web and mobile platforms

E. Mendi, Coskun Bayrak
{"title":"面部动画框架的网络和移动平台","authors":"E. Mendi, Coskun Bayrak","doi":"10.1109/HEALTH.2011.6026785","DOIUrl":null,"url":null,"abstract":"In this paper, we present a realistic facial animation framework for web and mobile platforms. The proposed system converts the text into 3D face animation with synthetic voice, ensuring synchronization of the head and eye movements with emotions and word flow of a sentence. The expression tags embedded in the input sentences turn into given emotion on the face while the virtual face is speaking. The final face motion is obtained by interpolating the keyframes over time to generate transitions between facial expressions. Visual results of the animation are sufficient for web and mobile environments. The proposed system may contribute to the development of various new generation e-Health applications such as intelligent communication systems, human-machine interfaces and interfaces for handicapped people.","PeriodicalId":187103,"journal":{"name":"2011 IEEE 13th International Conference on e-Health Networking, Applications and Services","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Facial animation framework for web and mobile platforms\",\"authors\":\"E. Mendi, Coskun Bayrak\",\"doi\":\"10.1109/HEALTH.2011.6026785\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present a realistic facial animation framework for web and mobile platforms. The proposed system converts the text into 3D face animation with synthetic voice, ensuring synchronization of the head and eye movements with emotions and word flow of a sentence. The expression tags embedded in the input sentences turn into given emotion on the face while the virtual face is speaking. 
The final face motion is obtained by interpolating the keyframes over time to generate transitions between facial expressions. Visual results of the animation are sufficient for web and mobile environments. The proposed system may contribute to the development of various new generation e-Health applications such as intelligent communication systems, human-machine interfaces and interfaces for handicapped people.\",\"PeriodicalId\":187103,\"journal\":{\"name\":\"2011 IEEE 13th International Conference on e-Health Networking, Applications and Services\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-06-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE 13th International Conference on e-Health Networking, Applications and Services\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/HEALTH.2011.6026785\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 13th International Conference on e-Health Networking, Applications and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/HEALTH.2011.6026785","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8

Abstract

In this paper, we present a realistic facial animation framework for web and mobile platforms. The proposed system converts the text into 3D face animation with synthetic voice, ensuring synchronization of the head and eye movements with emotions and word flow of a sentence. The expression tags embedded in the input sentences turn into given emotion on the face while the virtual face is speaking. The final face motion is obtained by interpolating the keyframes over time to generate transitions between facial expressions. Visual results of the animation are sufficient for web and mobile environments. The proposed system may contribute to the development of various new generation e-Health applications such as intelligent communication systems, human-machine interfaces and interfaces for handicapped people.
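The abstract states that the final face motion is produced by interpolating keyframes over time to generate transitions between facial expressions. The paper does not give implementation details, so the following is only an illustrative sketch of linear keyframe interpolation over hypothetical expression (blendshape) weight vectors, not the authors' actual method.

```python
from bisect import bisect_right

def interpolate_expression(keyframes, t):
    """Linearly interpolate facial-expression parameters at time t.

    keyframes: list of (time, weights) pairs sorted by time, where
    weights is a list of expression/blendshape weights in [0, 1].
    """
    times = [k[0] for k in keyframes]
    # Clamp to the first/last keyframe outside the animated range.
    if t <= times[0]:
        return list(keyframes[0][1])
    if t >= times[-1]:
        return list(keyframes[-1][1])
    # Find the keyframe pair bracketing t and blend between them.
    i = bisect_right(times, t)
    (t0, w0), (t1, w1) = keyframes[i - 1], keyframes[i]
    alpha = (t - t0) / (t1 - t0)
    return [a + alpha * (b - a) for a, b in zip(w0, w1)]

# Two hypothetical keyframes: neutral -> "happy" expression weights.
frames = [(0.0, [0.0, 0.0]), (1.0, [1.0, 0.4])]
print(interpolate_expression(frames, 0.5))  # -> [0.5, 0.2]
```

Halfway between the two keyframes, each weight is the midpoint of its endpoint values; evaluating this per rendered frame yields the smooth transition between expressions the abstract describes.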