Synthesizing expressions using facial feature point tracking: how emotion is conveyed

AFFINE '10 · Published: 2010-10-29 · DOI: 10.1145/1877826.1877835
T. Baltrušaitis, L. Riek, P. Robinson
Citations: 11

Abstract

Many approaches to the analysis and synthesis of facial expressions rely on automatically tracking landmark points on human faces. However, this approach is usually chosen for its ease of tracking rather than its ability to convey affect. We have conducted an experiment that evaluated the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task. The experiment compared mental state recognition rates of participants who viewed videos of human actors and synthetic characters (a physical android robot, a virtual avatar, and virtual stick figure drawings) enacting various facial expressions. All expressions made by the synthetic characters were automatically generated using the 22 facial feature points tracked in the videos of the human actors. Our results show no difference in accuracy across the three synthetic representations; however, all three were less accurate than the original human actor videos that generated them. Overall, facial expressions showing surprise were more easily identifiable than other mental states, suggesting that a geometric approach to synthesis may be better suited to some mental states than others.
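The abstract describes driving each synthetic character purely from the geometry of the 22 tracked feature points. A minimal sketch of that kind of landmark-displacement retargeting is shown below; the function name, the bounding-box size normalization, and the point layout are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def transfer_expression(actor_neutral, actor_frame, target_neutral, scale=None):
    """Retarget tracked landmark displacements from an actor onto a
    synthetic character's corresponding feature points (hypothetical sketch).

    Each argument is an (N, 2) array of N tracked feature points
    (the paper tracks N = 22 points per face).
    """
    actor_neutral = np.asarray(actor_neutral, dtype=float)
    actor_frame = np.asarray(actor_frame, dtype=float)
    target_neutral = np.asarray(target_neutral, dtype=float)

    # Displacement of each point relative to the actor's neutral pose.
    displacement = actor_frame - actor_neutral

    # Normalize for face size: here the ratio of bounding-box diagonals
    # (inter-ocular distance would be another common choice).
    if scale is None:
        actor_size = np.linalg.norm(actor_neutral.max(0) - actor_neutral.min(0))
        target_size = np.linalg.norm(target_neutral.max(0) - target_neutral.min(0))
        scale = target_size / actor_size

    # Apply the scaled displacements to the target's neutral layout.
    return target_neutral + scale * displacement
```

A purely geometric mapping like this carries no appearance information, which is consistent with the paper's finding that some mental states (e.g. surprise, with its large point displacements) survive the transfer better than others.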