Assisting the autistic with improved facial expression recognition from mixed expressions

Washef Ahmed, S. Mitra, Kunal Chanda, Debasis Mazumdar
{"title":"Assisting the autistic with improved facial expression recognition from mixed expressions","authors":"Washef Ahmed, S. Mitra, Kunal Chanda, Debasis Mazumdar","doi":"10.1109/NCVPRIPG.2013.6776229","DOIUrl":null,"url":null,"abstract":"People suffering from autism have difficulty with recognizing other people's emotions and are therefore unable to react to it. Although there have been attempts aimed at developing a system for analyzing facial expressions for persons suffering from autism, very little has been explored for capturing one or more expressions from mixed expressions which are a mixture of two closely related expressions. This is essential for psychotherapeutic tool for analysis during counseling. This paper presents the idea of improving the recognition accuracy of one or more of the six prototypic expressions namely happiness, surprise, fear, disgust, sadness and anger from the mixture of two facial expressions. For this purpose a motion gradient based optical flow for muscle movement is computed between frames of a given video sequence. The computed optical flow is further used to generate feature vector as the signature of six basic prototypic expressions. Decision Tree generated rule base is used for clustering the feature vectors obtained in the video sequence and the result of clustering is used for recognition of expressions. The relative intensity of expressions for a given face present in a frame is measured. With the introduction of Component Based Analysis which is basically computing the feature vectors on the proposed regions of interest on a face, considerable improvement has been noticed regarding recognition of one or more expressions. The results have been validated against human judgement.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCVPRIPG.2013.6776229","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 9

Abstract

People with autism have difficulty recognizing other people's emotions and are therefore unable to react to them. Although there have been attempts to develop systems that analyze facial expressions for persons with autism, very little work has explored capturing one or more expressions from mixed expressions, i.e., mixtures of two closely related expressions. This capability is essential for a psychotherapeutic tool used for analysis during counseling. This paper presents an approach to improving the recognition accuracy of one or more of the six prototypic expressions, namely happiness, surprise, fear, disgust, sadness and anger, from a mixture of two facial expressions. For this purpose, a motion-gradient-based optical flow capturing facial muscle movement is computed between frames of a given video sequence. The computed optical flow is then used to generate a feature vector that serves as the signature of the six basic prototypic expressions. A rule base generated by a decision tree is used to cluster the feature vectors obtained from the video sequence, and the clustering result is used to recognize the expressions. The relative intensity of the expressions present in a given frame is also measured. With the introduction of component-based analysis, in which the feature vectors are computed on proposed regions of interest on the face, considerable improvement is observed in the recognition of one or more expressions. The results have been validated against human judgement.
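The abstract only outlines the pipeline, so the following is a minimal illustrative sketch, not the authors' implementation, of its first two stages: dense optical flow between consecutive frames and a per-region motion feature vector for component-based analysis. OpenCV's Farneback flow is assumed as a stand-in for the paper's motion-gradient-based optical flow, and the region boxes, function name, and histogram parameters are hypothetical.

```python
# Illustrative sketch only: the paper's motion-gradient-based optical flow and
# exact regions of interest are not specified in the abstract, so Farneback
# flow and hand-picked face regions are used here as assumed stand-ins.
import cv2
import numpy as np

# Hypothetical regions of interest for component-based analysis, given as
# (row_start, row_end, col_start, col_end) fractions of the face crop.
REGIONS = {
    "brows_eyes": (0.15, 0.45, 0.05, 0.95),
    "nose_cheeks": (0.40, 0.70, 0.15, 0.85),
    "mouth_chin": (0.65, 0.95, 0.20, 0.80),
}

def region_motion_features(prev_gray, curr_gray, bins=8):
    """Dense optical flow between two grayscale face crops, summarized as a
    magnitude-weighted histogram of flow directions per region of interest."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in radians
    h, w = prev_gray.shape
    feats = []
    for (r0, r1, c0, c1) in REGIONS.values():
        m = mag[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
        a = ang[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
        hist, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi), weights=m)
        feats.append(hist / (hist.sum() + 1e-8))  # normalize per region
    return np.concatenate(feats)  # feature vector for one frame pair
```

In the paper, such per-frame feature vectors are clustered by a decision-tree-generated rule base to label each frame with one or more of the six prototypic expressions; a standard decision-tree classifier trained on labelled sequences would be the closest off-the-shelf analogue to that step.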