{"title":"Efficient Face And Gesture Recognition For Time Sensitive Application","authors":"Anush Ananthakumar","doi":"10.1109/SSIAI.2018.8470351","DOIUrl":null,"url":null,"abstract":"Face recognition systems are used in various fields such as biometric authentication, security enhancement, automobile control and user detection. This research is focused on developing a model to control a system using gestures, while simultaneously implementing continuous facial recognition to avoid unauthorized access. An effective face recognition system is developed and applied in conjunction with a gesture recognition system to control a wireless robot in real-time. The facial recognition system extracts the face using the Viola-Jones algorithm which utilizes Haar like features along with Adaboost training. This is followed by a Convolution Neural Network (CNN) based feature extractor and Support Vector Machine (SVM) to recognize the face. The gesture recognition is facilitated by using color segmentation, which involves extracting the skin tone of the detected face and using this to detect the position of hand. The gesture is obtained by tracking the hand using the Kanade-Lucas-Tomasi (KLT) algorithm. In this research, we additionally utilize a background subtraction model so as to extract the foreground and reduce the misclassifications. Such a technique highly improves the performance of the facial and gesture detector in complex and cluttered environments. The performance of the face detector was tested on different databases including the ORL, Caltech and Faces96 database. The efficacy of this system in controlling a robot in real-time has also been demonstrated in this research. It provides an accuracy of 94.44% for recognizing faces and greater than 90.8% for recognizing gestures in real-time applications. Such a system is seen to have superior performance coupled with a relatively lower computation requirement in comparison to existing techniques.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSIAI.2018.8470351","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Face recognition systems are used in various fields such as biometric authentication, security enhancement, automobile control, and user detection. This research focuses on developing a model to control a system using gestures while simultaneously performing continuous facial recognition to prevent unauthorized access. An effective face recognition system is developed and applied in conjunction with a gesture recognition system to control a wireless robot in real time. The facial recognition system extracts the face using the Viola-Jones algorithm, which utilizes Haar-like features along with AdaBoost training. This is followed by a Convolutional Neural Network (CNN) based feature extractor and a Support Vector Machine (SVM) classifier to recognize the face. Gesture recognition is facilitated by color segmentation: the skin tone of the detected face is extracted and used to locate the hand. The gesture is obtained by tracking the hand with the Kanade-Lucas-Tomasi (KLT) algorithm. A background subtraction model is additionally used to extract the foreground and reduce misclassifications, which greatly improves the performance of the face and gesture detectors in complex, cluttered environments. The performance of the face detector was tested on different databases, including the ORL, Caltech, and Faces96 databases. The efficacy of the system in controlling a robot in real time is also demonstrated. It achieves an accuracy of 94.44% for recognizing faces and greater than 90.8% for recognizing gestures in real-time applications. The system offers superior performance with a relatively lower computational requirement compared to existing techniques.
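For readers who want a concrete picture of the detection and tracking stages named in the abstract, the sketch below assembles them from standard OpenCV primitives: Haar-cascade (Viola-Jones) face detection, MOG2 background subtraction, HSV skin-tone segmentation seeded by the detected face, and pyramidal Lucas-Kanade (KLT) tracking. This is only an illustrative approximation under assumed parameters, not the authors' implementation; the cascade file, hue margins, and thresholds are placeholders, and the CNN + SVM recognition stage is omitted.

```python
import cv2
import numpy as np

# Viola-Jones detector (Haar-like features + AdaBoost cascade) shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Background subtractor used to restrict segmentation to the moving foreground.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def detect_face(frame_gray):
    """Return the first detected face box (x, y, w, h), or None."""
    faces = face_cascade.detectMultiScale(frame_gray,
                                          scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

def skin_foreground_mask(frame_bgr, face_box):
    """Estimate a skin-tone range from the face crop, then keep only
    skin-colored pixels that also belong to the moving foreground."""
    x, y, w, h = face_box
    face_hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    mean_hue = int(face_hsv[:, :, 0].mean())
    # Assumed +/-10 hue margin around the face's mean hue.
    lo = np.array([max(mean_hue - 10, 0), 40, 60], dtype=np.uint8)
    hi = np.array([min(mean_hue + 10, 179), 255, 255], dtype=np.uint8)
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(frame_hsv, lo, hi)
    fg = bg_subtractor.apply(frame_bgr)
    return cv2.bitwise_and(skin, fg)

def track_hand_points(prev_gray, curr_gray, points):
    """KLT step: pyramidal Lucas-Kanade tracking of hand feature points.
    `points` must be an (N, 1, 2) float32 array, e.g. from goodFeaturesToTrack."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                  points, None)
    return new_pts[status.ravel() == 1]
```

In a full pipeline along the lines described above, the hand region found in the skin/foreground mask would seed the KLT points, the tracked trajectory would be classified as a gesture command for the robot, and the CNN feature extractor plus SVM would run on the detected face crop for continuous authentication.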