R. Banaeeyan, M. H. Lye, M. Fauzi, H. A. Karim, John See
2016 6th International Conference on Intelligent and Advanced Systems (ICIAS), 1 August 2016. DOI: 10.1109/ICIAS.2016.7824069
Unsupervised face image retrieval using adjacent weighted component-based patches
Face Image Retrieval (FIR) remains a challenging problem in many real-world applications due to the varied poses and illumination of face images. State-of-the-art systems attain good precision by using the Bag-of-Visual-Words (BoVW) retrieval model, but their average precision (AP) declines rapidly when retrieving face images, primarily because they disregard face-specific features and generate visual words of low discriminative power, mainly at the quantization level. In this paper, we employ facial patch-based features to preserve more discriminative features at the patch level in order to achieve higher precision. We take advantage of a TF-IDF voting scheme to give greater weight to more discriminative facial features. First, features are extracted from facial components instead of the whole face, which preserves more informative and person-specific features. Then, an adjacent patch-based comparison is performed to preserve more discriminative features at the patch level while scoring candidate face images. Finally, a weighting approach is applied to further discriminate among features from different face components. Experimental results on 1,000 face images from LFW (Labeled Faces in the Wild) indicate the superiority of the proposed approach through a higher mean average precision (mAP).
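The TF-IDF voting scheme mentioned in the abstract can be sketched in miniature. The following is a hypothetical illustration of TF-IDF-weighted voting over quantized visual words, not the authors' implementation; the function names and the simplified scorer are assumptions:

```python
import math
from collections import Counter

def tfidf_idf(db_visual_words):
    """Compute an IDF weight for each visual word over a database,
    where each image is a list of quantized visual-word IDs.
    Words occurring in many images get weights near zero."""
    n = len(db_visual_words)
    df = Counter()  # document frequency of each visual word
    for words in db_visual_words:
        df.update(set(words))
    return {w: math.log(n / df[w]) for w in df}

def vote_score(query_words, candidate_words, idf):
    """Each visual word shared between query and candidate votes
    with weight tf(query) * idf; more discriminative (rarer)
    words contribute more to the candidate's score."""
    q_tf = Counter(query_words)
    c_set = set(candidate_words)
    return sum(tf * idf.get(w, 0.0) for w, tf in q_tf.items() if w in c_set)
```

Under this scheme, a visual word that appears in every database image carries zero weight, while a rare, person-specific word dominates the vote; the paper's component-based weighting would further scale these votes by which facial component a patch comes from.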