Generation of emotional feature space for facial expression recognition using self-mapping
M. Ishii, Toshio Shimodate, Y. Kageyama, Tsuyoshi Takahashi, M. Nishida
2012 Proceedings of SICE Annual Conference (SICE), August 2012
DOI: 10.5772/9169
Citations: 2
Abstract
This paper proposes a method for generating a subject-specific emotional feature space that captures the correspondence between changes in facial expression patterns and the degree of emotion. The feature space is generated using self-organizing maps (SOMs) and counter-propagation networks (CPNs). The training-data input method and the number of dimensions of the CPN mapping space are investigated. The results clearly show that the input ratio of the training data should be kept constant across emotion categories, and that the number of dimensions of the CPN mapping space should be extended to effectively express detailed emotion levels.
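The abstract's core building block, a self-organizing map, projects high-dimensional facial-feature vectors onto a low-dimensional grid while preserving topology. The following is a minimal sketch of standard SOM training in NumPy, not the authors' implementation; the grid sizes, decay schedules, and function names are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Train a basic 2-D self-organizing map on row-vector samples.

    Illustrative sketch only: hyperparameters and decay schedules are
    assumptions, not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One weight vector per grid node, initialized randomly.
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates, used to compute neighborhood distances on the map.
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood radius
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit: node whose weights are closest to the sample.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU, measured on the grid.
            grid_d2 = (ys - by) ** 2 + (xs - bx) ** 2
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            # Pull nearby nodes' weights toward the sample.
            weights += lr * h * (x - weights)
    return weights

def bmu(weights, x):
    """Map a sample to its best-matching grid coordinates."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, `bmu` places each expression sample on the grid; in a CPN-style setup, an additional output layer associated with each grid node would then carry the emotion-category label, which is where the paper's question about the mapping space's dimensionality arises.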