Nonlinear analysis of video images using deep recurrent auto-associative neural networks for facial understanding
S. M. Moghadam, S. Seyyedsalehi
2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA), April 2017
DOI: 10.1109/PRIA.2017.7983050
Citations: 4
Abstract
Preliminary experiments on deep architectures of auto-associative neural networks have demonstrated a remarkable ability for complex nonlinear feature extraction, manifold formation, and dimension reduction. However, training such deep architectures remains a serious challenge. Furthermore, the valuable temporal information contained in video sequences is highly useful for manifold formation and recognition tasks; to exploit such sequential information, recurrent networks are widely used for dynamical modeling. This paper presents a novel nine-layer deep recurrent auto-associative neural network capable of simultaneously extracting three different types of information (identity, emotion, and gender) from videos of the face. The proposed framework is extensively evaluated on the extended Cohn-Kanade database for dynamical facial expression analysis. The experimental results show recognition rates of 95.35% for emotion and 97.42% for gender, which is comparable with other state-of-the-art methods.
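The abstract describes a recurrent auto-associative network that compresses a video sequence into a shared bottleneck code from which several attributes are read out at once. The paper's nine-layer architecture and training details are not given here, so the following is only a minimal numpy sketch of the general idea: all layer sizes, weights, and the three read-out heads are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
D_IN, D_HID, D_BOT = 64, 32, 12   # frame features, recurrent hidden, bottleneck

# Hypothetical weights: recurrent encoder, decoder, and three task heads
W_in  = rng.normal(0, 0.1, (D_HID, D_IN))
W_rec = rng.normal(0, 0.1, (D_HID, D_HID))
W_bot = rng.normal(0, 0.1, (D_BOT, D_HID))
W_dec = rng.normal(0, 0.1, (D_IN, D_BOT))
heads = {                          # identity / emotion / gender read-outs
    "identity": rng.normal(0, 0.1, (10, D_BOT)),
    "emotion":  rng.normal(0, 0.1, (7, D_BOT)),
    "gender":   rng.normal(0, 0.1, (2, D_BOT)),
}

def forward(frames):
    """Run one video (T x D_IN array of frame features) through the net."""
    h = np.zeros(D_HID)
    for x in frames:                      # recurrence carries temporal context
        h = np.tanh(W_in @ x + W_rec @ h)
    z = np.tanh(W_bot @ h)                # shared bottleneck (manifold code)
    recon = W_dec @ z                     # auto-associative reconstruction
    preds = {k: int(np.argmax(W @ z)) for k, W in heads.items()}
    return recon, preds

video = rng.normal(size=(5, D_IN))        # 5 synthetic frames
recon, preds = forward(video)
```

The auto-associative part is the reconstruction target `recon`, which forces the bottleneck `z` to form a compact manifold of the input; the three classification heads then share that single code, which is what allows identity, emotion, and gender to be extracted simultaneously.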