{"title":"Face Recognition Using Modified Histogram of Oriented Gradients and Convolutional Neural Networks","authors":"Raveendra K","doi":"10.5815/ijigsp.2023.05.05","DOIUrl":null,"url":null,"abstract":"We are aiming in this work to develop an improved face recognition system for person-dependent and person-independent variants. To extract relevant facial features, we are using the convolutional neural network. These features allow comparing faces of different subjects in an optimized manner. The system training module firstly recognizes different subjects of dataset, in another approach, the module processes a different set of new images. Use of CNN alone for face recognition has achieved promising recognition rate, however many other works have showed declined in recognition rate for many complex datasets. Further, use of CNN alone exhibits reduced recognition rate for large scale databases. To overcome the above problem, we are proposing a modified spatial texture pattern extraction technique namely modified Histogram oriented gradient (m-HOG) for extracting facial image features along three gradient directions along with CNN algorithm to classify the face image based on the features. In the preprocessing stage, the face region is captured by removing the background from the input face images and is resized to 100×100. The m-HOG features are retrieved using histogram channels evenly distributed between 0 and 180 degrees. The obtained features are resized as a matrix having dimension 66×198 and which are passed to the CNN to extract robust and discriminative features and are classified using softmax classification layer. The recognition rates obtained for L-Spacek, NIR, JAFFE and YALE database are 99.80%, 91.43%, 95.00% and 93.33% respectively and are found to be better when compared to the existing methods.","PeriodicalId":378340,"journal":{"name":"International Journal of Image, Graphics and Signal Processing","volume":"53 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Image, Graphics and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5815/ijigsp.2023.05.05","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this work, we aim to develop an improved face recognition system for both person-dependent and person-independent variants. To extract relevant facial features, we use a convolutional neural network (CNN). These features allow faces of different subjects to be compared in an optimized manner. In one approach, the training module recognizes the different subjects of a dataset; in another, the module processes a different set of new images. Using a CNN alone for face recognition has achieved promising recognition rates; however, several works have reported declining recognition rates on complex datasets, and a CNN alone also exhibits reduced recognition rates on large-scale databases. To overcome this problem, we propose a modified spatial texture pattern extraction technique, namely the modified Histogram of Oriented Gradients (m-HOG), which extracts facial image features along three gradient directions, combined with a CNN that classifies face images based on these features. In the preprocessing stage, the face region is captured by removing the background from the input face image and is resized to 100×100 pixels. The m-HOG features are computed using histogram channels evenly distributed between 0 and 180 degrees. The resulting features are reshaped into a 66×198 matrix, which is passed to the CNN to extract robust and discriminative features and classified using a softmax layer. The recognition rates obtained on the L-Spacek, NIR, JAFFE and YALE databases are 99.80%, 91.43%, 95.00% and 93.33%, respectively, and compare favorably with existing methods.
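The pipeline described in the abstract (crop and resize to 100×100, HOG over 0–180 degrees, reshape to 66×198, CNN with softmax head) can be sketched as follows. This is a minimal illustrative sketch in Python, not the authors' implementation: it uses skimage's standard `hog` as a stand-in for the paper's m-HOG (whose exact three-direction formulation is not given in the abstract), a small PyTorch CNN as a placeholder architecture, and an assumed zero-padding of the HOG vector to fill the 66×198 layout.

```python
# Illustrative sketch of an m-HOG + CNN face recognition pipeline.
# Assumptions (not from the paper): skimage HOG parameters, the
# zero-padding to 66x198, and the CNN layer sizes.

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
import torch
import torch.nn as nn

def extract_features(face_img: np.ndarray) -> np.ndarray:
    """Resize the cropped face to 100x100, compute HOG features with
    orientation bins evenly spread over 0-180 degrees, and reshape
    the feature vector into a 66x198 matrix."""
    face = resize(face_img, (100, 100))              # preprocessing step
    feats = hog(face,
                orientations=9,                      # bins over 0-180 deg
                pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
    out = np.zeros(66 * 198)                         # assumed layout
    n = min(feats.size, out.size)
    out[:n] = feats[:n]                              # zero-pad remainder
    return out.reshape(66, 198)

class FaceCNN(nn.Module):
    """Small CNN over the 1x66x198 feature matrix. Softmax is implicit
    in nn.CrossEntropyLoss during training."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            # 66x198 -> 33x99 -> 16x49 after two 2x2 max-pools
            nn.Linear(32 * 16 * 49, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Usage: feed one preprocessed feature matrix through the classifier.
img = np.random.rand(120, 120)                       # stand-in face crop
x = torch.tensor(extract_features(img),
                 dtype=torch.float32).unsqueeze(0).unsqueeze(0)
logits = FaceCNN(num_classes=40)(x)                  # per-subject scores
```

Training such a model on each database (L-Spacek, NIR, JAFFE, YALE) with a cross-entropy objective would then yield per-subject class probabilities via the softmax layer, as the abstract describes.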