Comparative Analysis of Emotion Detection from Facial Expressions and Voice Using Local Binary Patterns and Markov Models: Computer Vision and Facial Recognition

Kennedy Chengeta
{"title":"Comparative Analysis of Emotion Detection from Facial Expressions and Voice Using Local Binary Patterns and Markov Models: Computer Vision and Facial Recognition","authors":"Kennedy Chengeta","doi":"10.1145/3271553.3271574","DOIUrl":null,"url":null,"abstract":"Emotion detection has been achieved widely in facial and voice recognition separately with considerable success. The 6 emotional categories coming out of the classification include anger, fear, disgust, happiness and surprise. These can be infered from one's facial expressions both in the form of micro and macro expressions. In facial expressions the emotions are derived by feature extracting the facial expressions in different facial poses and classifying the expression feature vectors derived. Similarly automatic classification of a person's speech's affective state has also been used in signal processing to give insights into the nature of emotions. Speech being a critical tool for communication has been used to derive the emotional state of a human being. Different approaches have been successfully used to derive emotional states either in the form of facial expression recognition or speech emotional recognition being used. Less work has looked at fusing the two approaches to see if this improves emotional recognition accuracy. The study analyses the strengths of both and also limitations of either. The study reveals that emotional derivation based on facial expression recognition and acoustic information complement each other and a fusion of the two leads to better performance and results compared to the audio or acoustic recognition alone.","PeriodicalId":414782,"journal":{"name":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Conference on Vision, Image and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3271553.3271574","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Emotion detection has been widely and separately studied in facial recognition and in voice recognition, with considerable success. The six emotional categories produced by the classification are anger, fear, disgust, happiness, sadness, and surprise. These can be inferred from a person's facial expressions, in the form of both micro- and macro-expressions. In facial expression recognition, emotions are derived by extracting features from the face in different poses and classifying the resulting feature vectors. Similarly, automatic classification of the affective state of a person's speech has been used in signal processing to give insight into the nature of emotions: speech, as a critical tool for communication, carries cues to the speaker's emotional state. Different approaches have been applied successfully to derive emotional states through either facial expression recognition or speech emotion recognition, but less work has examined fusing the two to see whether this improves recognition accuracy. This study analyses the strengths and the limitations of both. It reveals that emotion derivation based on facial expression recognition and on acoustic information complement each other, and that a fusion of the two leads to better performance than acoustic recognition alone.
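The title names local binary patterns (LBP) as the facial feature. As a minimal sketch of that step, and not the authors' exact pipeline, the following extracts a uniform-LBP histogram from a grayscale face crop with scikit-image; the neighbourhood parameters P and R are illustrative defaults.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1.0):
    """Uniform-LBP histogram for one grayscale face crop (the feature vector
    that would then be fed to a classifier)."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    # Uniform LBP with P neighbours yields P + 2 distinct codes.
    n_bins = P + 2
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # stand-in crop
    print(lbp_histogram(face).shape)  # (10,) for P=8
```

In practice the face is often divided into a grid of cells, with one histogram per cell concatenated into the final descriptor, which preserves spatial layout.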
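On the acoustic side, the title names Markov models. A common formulation, assumed here rather than taken from the paper, scores an utterance's frame-wise features against one hidden Markov model per emotion and picks the best-scoring class. A self-contained sketch of the scaled forward algorithm, in pure NumPy with the model parameters taken as given:

```python
import numpy as np

def forward_log_likelihood(obs_probs, pi, A):
    """log P(observations | model) via the scaled forward algorithm.

    obs_probs: (T, N) array, obs_probs[t, i] = P(o_t | state i)
    pi:        (N,)  initial state distribution
    A:         (N, N) transition matrix, A[i, j] = P(state j | state i)
    """
    alpha = pi * obs_probs[0]
    log_lik = 0.0
    for t in range(1, len(obs_probs)):
        # Rescale each step so long utterances do not underflow.
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = (alpha / scale) @ A * obs_probs[t]
    return log_lik + np.log(alpha.sum())

# Toy 2-state model; obs_probs would come from per-frame acoustic features.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
obs = np.array([[0.9, 0.2],
                [0.1, 0.8],
                [0.5, 0.5]])
print(forward_log_likelihood(obs, pi, A))
```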
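The abstract's central claim is that fusing the two modalities outperforms acoustic recognition alone. One simple way to realise such a fusion, shown here as a hypothetical late-fusion rule rather than the authors' method, is to combine per-class scores from the two recognisers:

```python
import numpy as np

# Hypothetical per-class posteriors from the two unimodal recognisers
# (each sums to 1); labels follow the six categories in the abstract.
labels = ["anger", "fear", "disgust", "happiness", "sadness", "surprise"]
p_face = np.array([0.10, 0.05, 0.05, 0.60, 0.05, 0.15])   # facial LBP classifier
p_voice = np.array([0.20, 0.10, 0.05, 0.40, 0.10, 0.15])  # acoustic HMM scores

w = 0.5  # modality weight; would be tuned on a validation set
p_fused = w * p_face + (1 - w) * p_voice
print(labels[int(np.argmax(p_fused))])  # -> "happiness"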