{"title":"FACE RECOGNITION from NON-FRONTAL IMAGES Using DEEP NEURAL NETWORK","authors":"S. Chowdhury, J. Sil","doi":"10.1109/ICAPR.2017.8593160","DOIUrl":null,"url":null,"abstract":"Person recognition from pose-variant face images is a well addressed, yet challenging problem, especially for surveillance in a crowded place where the pose variation is large in the test set compare to the training set. Conventional feature extraction based face recognition techniques are not efficient enough to solve the problem. In this paper, a noble mechanism has been proposed to learn the training set consisting of few pose variant images and many frontal images of different persons using deep learning algorithms. At first, autoencoders are trained to build the templates for representing the pose variant training images. The left (45°) and right (+45°) templates cover all pose variations of test images from 90° to +90°. In the next step the convolution neural network (CNN) architectures are used in supervised mode for transforming the templates into person specific frontal images present in the training set. Left and right cluster of trained CNNs are obtained with respect to left and right templates. In the testing phase, the head-pose of the test image is estimated using collaborative representation based classifier (CRC) in order to select the appropriate cluster of CNN architectures for generation of the frontal image. The CNN architecture which provides the best match frontal image with the training set is recognized as the specific person. The matching score is measured using correlation coefficient and Frobenius norm. For a frontal test image if the matching score is below than the predefined threshold then the proposed method does not recognize the image. However, the training set has been updated by the unrecognized frontal test images for future recognition. The accuracy of the proposed method is around 99% when tested on CMU PIE database which is much higher in comparison to the existing face-recognition methods.","PeriodicalId":239965,"journal":{"name":"2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR)","volume":"514 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAPR.2017.8593160","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Person recognition from pose-variant face images is a well-studied yet challenging problem, especially for surveillance in crowded places where the pose variation in the test set is large compared to the training set. Conventional feature-extraction-based face recognition techniques are not efficient enough to solve the problem. In this paper, a novel mechanism is proposed to learn a training set consisting of a few pose-variant images and many frontal images of different persons using deep learning algorithms. First, autoencoders are trained to build templates representing the pose-variant training images. The left (−45°) and right (+45°) templates cover all pose variations of test images from −90° to +90°. In the next step, convolutional neural network (CNN) architectures are used in supervised mode to transform the templates into the person-specific frontal images present in the training set. Left and right clusters of trained CNNs are obtained with respect to the left and right templates. In the testing phase, the head pose of the test image is estimated using a collaborative representation based classifier (CRC) in order to select the appropriate cluster of CNN architectures for generating the frontal image. The CNN architecture that produces the frontal image best matching the training set identifies the specific person. The matching score is measured using the correlation coefficient and the Frobenius norm. For a frontal test image, if the matching score is below the predefined threshold, the proposed method does not recognize the image; however, the training set is updated with the unrecognized frontal test images for future recognition. The accuracy of the proposed method is around 99% when tested on the CMU PIE database, which is much higher than that of existing face-recognition methods.
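As a rough illustration of the matching step described in the abstract, the Python sketch below scores a CNN-generated frontal image against the frontal training images using a correlation coefficient and a Frobenius-norm distance, and rejects the image when the best score falls below a threshold. The function names, the tie-breaking rule, and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def correlation_coefficient(a, b):
    # Pearson correlation between two images flattened to vectors
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

def frobenius_distance(a, b):
    # Frobenius norm of the pixel-wise difference between two images
    return float(np.linalg.norm(a.astype(np.float64) - b.astype(np.float64), ord="fro"))

def recognize(generated_frontal, gallery, threshold=0.9):
    """Match a CNN-generated frontal image against frontal training images.

    gallery: dict mapping person id -> frontal training image (2-D array).
    Returns the best-matching person id, or None if the best correlation
    falls below the rejection threshold (image left unrecognized).
    Note: the combination rule and threshold here are assumptions.
    """
    best_id, best_corr, best_fro = None, -1.0, np.inf
    for person_id, frontal in gallery.items():
        corr = correlation_coefficient(generated_frontal, frontal)
        fro = frobenius_distance(generated_frontal, frontal)
        # Prefer higher correlation; break ties with smaller Frobenius distance
        if corr > best_corr or (corr == best_corr and fro < best_fro):
            best_id, best_corr, best_fro = person_id, corr, fro
    return best_id if best_corr >= threshold else None
```

In this sketch an unrecognized image (a `None` return) would then be added to the training gallery for future recognition, mirroring the update step described in the abstract.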