Generation of emotional feature space for facial expression recognition using self-mapping

M. Ishii, Toshio Shimodate, Y. Kageyama, Tsuyoshi Takahashi, M. Nishida
{"title":"Generation of emotional feature space for facial expression recognition using self-mapping","authors":"M. Ishii, Toshio Shimodate, Y. Kageyama, Tsuyoshi Takahashi, M. Nishida","doi":"10.5772/9169","DOIUrl":null,"url":null,"abstract":"This paper proposes a method for generating a subject-specific emotional feature space that expresses the correspondence between the changes in facial expression patterns and the degree of emotions. The feature space is generated using self-organizing maps and counter propagation networks. The training data input method and the number of dimensions of the CPN mapping space are investigated. The results clearly show that the input ratio of the training data should be constant for every emotion category and the number of dimensions of the CPN mapping space should be extended to effectively express a level of detailed emotion.","PeriodicalId":411966,"journal":{"name":"2012 Proceedings of SICE Annual Conference (SICE)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Proceedings of SICE Annual Conference (SICE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5772/9169","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

This paper proposes a method for generating a subject-specific emotional feature space that expresses the correspondence between changes in facial expression patterns and the degree of emotion. The feature space is generated using self-organizing maps (SOMs) and counter-propagation networks (CPNs). The training data input method and the number of dimensions of the CPN mapping space are investigated. The results clearly show that the input ratio of the training data should be constant for every emotion category, and that the number of dimensions of the CPN mapping space should be extended to express detailed emotion levels effectively.
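The paper does not publish its implementation, but the core building block it names, a self-organizing map, can be sketched briefly. The following is a minimal illustrative SOM in Python/NumPy, not the authors' method: samples are mapped onto a small 2-D grid of weight vectors, and training pulls the best-matching unit (BMU) and its grid neighbors toward each sample with a decaying learning rate and neighborhood radius. Grid size, epochs, and decay schedule here are arbitrary choices for illustration.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a simple 2-D self-organizing map on row-vector samples."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight vector per map node, randomly initialized
    weights = rng.standard_normal((grid_h, grid_w, dim))
    # Grid coordinates of every node, used for neighborhood distances
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data:
            # Linearly decay learning rate and neighborhood radius
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Best-matching unit: node whose weight is closest to the sample
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood on the grid around the BMU
            grid_d2 = np.sum((coords - np.array(bmu, dtype=float)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            # Pull each node's weight toward the sample, scaled by neighborhood
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    """Return the grid coordinates of the node closest to sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

In the paper's setting, the inputs would be facial-expression feature vectors rather than the synthetic data above, and a CPN adds a supervised output layer on top of such a map; that layer is not sketched here.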