Face detection and facial feature extraction in color image
Zhi-fang Liu, Zhifu You, A.K. Jain, Yun-qiong Wang
Proceedings Fifth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2003), 27 September 2003. DOI: 10.1109/ICCIMA.2003.1238112. Cited by: 34.
Face detection and facial feature extraction play an important role in video surveillance, human-computer interaction, and face recognition. Color is a useful cue in computer vision, especially for skin detection. In this paper, we propose a novel approach to skin segmentation and facial feature extraction. The proposed skin segmentation method combines the chrominance components of a nonlinear YCrCb color model. The goal of skin detection is to group pixels into possible face candidate regions; connected-component analysis is used for this pixel grouping. To detect facial features in a scale-invariant manner, the possible face candidate regions are first normalized, and the texture information in these regions is then segmented using the mean and variance of the face region. Edges are detected with a method based on multi-scale morphological operations. The eyes are located using PCA on edge direction, and the other features, such as the nose and mouth, are located using geometrical shape information. Because all of these techniques are simple and efficient, the proposed method is computationally efficient and suitable for practical applications. In our experiments, the proposed method was successfully evaluated on two different test datasets. The detection accuracy is around 98%, and the average run time ranges from 0.1 to 0.3 seconds per frame.
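To make the first stage of the pipeline concrete, the sketch below (Python with OpenCV) thresholds the Cr and Cb chrominance channels and groups the resulting skin pixels with connected-component analysis to form face candidate regions. It is only a minimal approximation of the approach described in the abstract: the threshold values and the use of OpenCV's standard (linear) YCrCb conversion are assumptions, whereas the paper relies on a nonlinear YCrCb model whose details are not given here, and the later steps (normalization, mean/variance texture segmentation, morphological edge detection, PCA-based eye location) are not shown.

```python
# Minimal sketch of the skin-segmentation and pixel-grouping stage.
# Assumptions: linear YCrCb conversion and illustrative Cr/Cb thresholds,
# not the paper's nonlinear YCrCb model.
import cv2
import numpy as np

def skin_face_candidates(bgr_image, min_area=500):
    """Return bounding boxes (x, y, w, h) of connected skin regions
    that can serve as possible face candidate regions."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)

    # Commonly used chrominance ranges for skin; purely illustrative values.
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)

    # Light morphological opening to suppress isolated noise pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Connected-component analysis: each sufficiently large component
    # becomes a candidate region to be normalized and examined further.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            candidates.append((x, y, w, h))
    return candidates
```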