Towards the Separation of Rigid and Non-rigid Motions for Facial Expression Analysis

Georg Layher, Stephan Tschechne, R. Niese, A. Al-Hamadi, H. Neumann
{"title":"Towards the Separation of Rigid and Non-rigid Motions for Facial Expression Analysis","authors":"Georg Layher, Stephan Tschechne, R. Niese, A. Al-Hamadi, H. Neumann","doi":"10.1109/IE.2015.38","DOIUrl":null,"url":null,"abstract":"In intelligent environments, computer systems not solely serve as passive input devices waiting for user interaction but actively analyze their environment and adapt their behaviour according to changes in environmental parameters. One essential ability to achieve this goal is to analyze the mood, emotions and dispositions a user experiences while interacting with such intelligent systems. Features allowing to infer such parameters can be extracted from auditive, as well as visual sensory input streams. For the visual feature domain, in particular facial expressions are known to contain rich information about a user's emotional state and can be detected by using either static and/or dynamic image features. During interaction facial expressions are rarely performed in isolation, but most of the time co-occur with movements of the head. Thus, optical flow based facial features are often compromised by additional motions. Parts of the optical flow may be caused by rigid head motions, while other parts reflect deformations resulting from facial expressivity (non-rigid motions). In this work, we propose the first steps towards an optical flow based separation of rigid head motions from non-rigid motions caused by facial expressions. We suggest that after their separation, both, head movements and facial expressions can be used as a basis for the recognition of a user's emotions and dispositions and thus allow a technical system to effectively adapt to the user's state.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 International Conference on Intelligent Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IE.2015.38","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In intelligent environments, computer systems do not merely serve as passive input devices waiting for user interaction, but actively analyze their environment and adapt their behaviour to changes in environmental parameters. One ability essential to this goal is analyzing the mood, emotions and dispositions a user experiences while interacting with such intelligent systems. Features that allow inferring such parameters can be extracted from auditory as well as visual sensory input streams. In the visual domain, facial expressions in particular are known to carry rich information about a user's emotional state and can be detected using static and/or dynamic image features. During interaction, facial expressions are rarely performed in isolation; most of the time they co-occur with movements of the head. Optical-flow-based facial features are therefore often compromised by these additional motions: parts of the optical flow may be caused by rigid head motions, while other parts reflect deformations resulting from facial expressivity (non-rigid motions). In this work, we propose first steps towards an optical-flow-based separation of rigid head motions from the non-rigid motions caused by facial expressions. We suggest that, after their separation, both head movements and facial expressions can be used as a basis for recognizing a user's emotions and dispositions, thus allowing a technical system to adapt effectively to the user's state.
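To make the decomposition concrete, the following is a minimal sketch of one common way to approximate such a separation; it is an illustration under simplifying assumptions, not the method proposed in the paper. It estimates dense optical flow with OpenCV's Farneback algorithm, robustly fits a single 2D affine transform to the flow field with RANSAC (`cv2.estimateAffine2D`) as a stand-in for the rigid head motion, and treats the residual flow as the non-rigid (expression) component. The function name `separate_rigid_nonrigid` and the affine approximation of head motion are assumptions introduced here for illustration.

```python
# Sketch: split dense optical flow into a dominant rigid (affine) part
# and a residual non-rigid part. This is NOT the paper's method; the
# affine model is a simplifying assumption for 3D head motion.

import cv2
import numpy as np

def separate_rigid_nonrigid(prev_gray, next_gray):
    """Return (rigid_flow, nonrigid_flow), each of shape (h, w, 2)."""
    # Dense optical flow between two grayscale face crops (Farneback).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow.reshape(-1, 2)

    # Robustly fit one affine transform to all flow correspondences.
    # RANSAC inliers follow the dominant (head) motion; expression
    # deformations are treated as outliers to that model.
    A, _inliers = cv2.estimateAffine2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=1.0)
    if A is None:
        raise RuntimeError("affine fit failed")

    # Rigid flow predicted by the affine model at every pixel.
    rigid_dst = src @ A[:, :2].T + A[:, 2]
    rigid_flow = (rigid_dst - src).reshape(h, w, 2)

    # The residual is attributed to non-rigid facial deformation.
    nonrigid_flow = flow - rigid_flow
    return rigid_flow, nonrigid_flow
```

In practice one would restrict the fit to a tracked face region and subsample the flow field before running RANSAC; a perspective or full 3D head model would also capture rigid head motion more faithfully than a 2D affine transform.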