Emotion Recognition Based on Decoupling the Spatial Context from the Temporal Dynamics of Facial Expressions

R. Alazrai, K. M. Yousef, M. Daoud
2019 International Symposium on Networks, Computers and Communications (ISNCC), June 2019
DOI: 10.1109/ISNCC.2019.8909141
Citations: 2

Abstract

This paper presents an emotion recognition approach based on decoupling the spatial context from the temporal dynamics of facial expressions in video sequences. In particular, each emotional state is represented as a set of temporal phases, where each phase exhibits different temporal dynamics, such as the speed of expression and the variable length of each phase. In this work, we have developed an algorithm for automatically detecting the temporal phases of human facial expressions by employing the concept of mutual information to define a similarity measure among different video frames. Moreover, we have developed a two-layer framework for emotional state recognition. The first layer utilizes the spatial context to classify the frames in an input video into emotion-specific temporal phases using a support vector machine classifier. In the second layer, dynamic time warping is used to classify the sequence of labels associated with the video frames, which is generated in the first layer, into a specific emotional state. In order to validate the performance of the proposed approach, we have conducted extensive computer simulations, and the results show an average classification accuracy of 93.53% on the Extended Cohn-Kanade (CK+) facial-expression database.
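The abstract names two computational building blocks: a mutual-information similarity measure between video frames (used for phase detection) and dynamic time warping over the per-frame label sequence produced by the first layer. The sketch below illustrates both ideas in a minimal form; the paper's exact formulations are not given in the abstract, so the histogram-based MI estimate, the 0/1 label-mismatch cost, and all function names and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(frame_a, frame_b, bins=32):
    """Histogram-based mutual information between two grayscale frames.

    Illustrative similarity measure in the spirit of the paper's
    phase-detection step: frames within the same temporal phase are
    expected to score higher MI than frames from different phases.
    """
    hist_2d, _, _ = np.histogram2d(frame_a.ravel(), frame_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)                   # marginal of frame_a
    py = pxy.sum(axis=0)                   # marginal of frame_b
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two label sequences,
    using an assumed 0/1 mismatch cost.

    In the second layer, a predicted phase-label sequence could be
    compared against a template sequence per emotion; DTW absorbs the
    variable length and speed of each phase.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j],      # deletion
                                 D[i, j - 1],      # insertion
                                 D[i - 1, j - 1])  # match/substitution
    return float(D[n, m])
```

Note how DTW handles the "variable length of each phase" mentioned in the abstract: a template like `[onset, apex, offset]` matches a slower sequence such as `[onset, onset, apex, apex, apex, offset]` at zero cost, because repeated labels warp onto a single template step.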