Unsupervised face image retrieval using adjacent weighted component-based patches

R. Banaeeyan, M. H. Lye, M. Fauzi, H. A. Karim, John See
{"title":"基于相邻加权分量补丁的无监督人脸图像检索","authors":"R. Banaeeyan, M. H. Lye, M. Fauzi, H. A. Karim, John See","doi":"10.1109/ICIAS.2016.7824069","DOIUrl":null,"url":null,"abstract":"Face Image Retrieval (FIR) remains a challenging problem in many real word applications due to various pose and illumination alterations of face images. State-of-the-art systems attain good precision by utilizing Bag-of-Visual-Words (BoVW) retrieval model, but their average precision (AP) decline rapidly while retrieving face images, primarily because they disregard face-specific features, and generate low discriminative visual words, mainly at the quantization level. In this paper, we employ facial patch-based features to preserve more discriminative features at patch-level in order to achieve a higher precision. We take advantage of the TF-IDF voting scheme to give more weights to more discriminative facial features. First, features are extracted from facial components instead of the whole face which preserves more informative and person-specific features. Then, an adjacent patch-based comparison is performed to preserve more discriminative features at patch-level while scoring candidate face images. Finally, a weighting approach is implemented to give even more discrimination to different features from different face components. Experimental results on 1,000 face images from LFW (Labeled Faces in the Wild) indicate the superiority of proposed approach by means of higher mean average precision (mAP).","PeriodicalId":247287,"journal":{"name":"2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Unsupervised face image retrieval using adjacent weighted component-based patches\",\"authors\":\"R. Banaeeyan, M. H. Lye, M. Fauzi, H. A. Karim, John See\",\"doi\":\"10.1109/ICIAS.2016.7824069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Face Image Retrieval (FIR) remains a challenging problem in many real word applications due to various pose and illumination alterations of face images. State-of-the-art systems attain good precision by utilizing Bag-of-Visual-Words (BoVW) retrieval model, but their average precision (AP) decline rapidly while retrieving face images, primarily because they disregard face-specific features, and generate low discriminative visual words, mainly at the quantization level. In this paper, we employ facial patch-based features to preserve more discriminative features at patch-level in order to achieve a higher precision. We take advantage of the TF-IDF voting scheme to give more weights to more discriminative facial features. First, features are extracted from facial components instead of the whole face which preserves more informative and person-specific features. Then, an adjacent patch-based comparison is performed to preserve more discriminative features at patch-level while scoring candidate face images. Finally, a weighting approach is implemented to give even more discrimination to different features from different face components. 
Experimental results on 1,000 face images from LFW (Labeled Faces in the Wild) indicate the superiority of proposed approach by means of higher mean average precision (mAP).\",\"PeriodicalId\":247287,\"journal\":{\"name\":\"2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICIAS.2016.7824069\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIAS.2016.7824069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Face Image Retrieval (FIR) remains a challenging problem in many real-world applications due to pose and illumination variations in face images. State-of-the-art systems attain good precision by using the Bag-of-Visual-Words (BoVW) retrieval model, but their average precision (AP) declines rapidly when retrieving face images, primarily because they disregard face-specific features and generate visual words with low discriminative power, mainly at the quantization level. In this paper, we employ facial patch-based features to preserve more discriminative features at the patch level in order to achieve higher precision. We take advantage of the TF-IDF voting scheme to give more weight to the more discriminative facial features. First, features are extracted from facial components instead of the whole face, which preserves more informative and person-specific features. Then, an adjacent patch-based comparison is performed to preserve more discriminative features at the patch level while scoring candidate face images. Finally, a weighting approach is applied to further discriminate between features from different face components. Experimental results on 1,000 face images from LFW (Labeled Faces in the Wild) indicate the superiority of the proposed approach in terms of a higher mean average precision (mAP).
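The abstract describes a pipeline built from TF-IDF-weighted visual words, component-based patch features, an adjacent patch comparison, and per-component weighting. As a rough illustration only, and not the authors' implementation, the sketch below shows how such a scoring step might be wired together in Python with NumPy; the patch grid, the 8-neighbour search window, and the component_weights vector are assumptions introduced here for demonstration.

```python
# Illustrative sketch (assumptions, not the paper's code): TF-IDF-weighted BoVW
# scoring over facial-component patches with an adjacent-patch comparison.
import numpy as np

def tf_idf_weights(patch_histograms):
    """Per-visual-word IDF weights from quantized patch histograms.

    patch_histograms: (num_patches, vocab_size) array of visual-word counts.
    """
    num_patches = patch_histograms.shape[0]
    df = np.count_nonzero(patch_histograms > 0, axis=0)   # document frequency per word
    return np.log((num_patches + 1) / (df + 1)) + 1.0     # smoothed IDF

def patch_similarity(query_hist, cand_hist, idf):
    """Cosine similarity between two TF-IDF-weighted patch histograms."""
    q = query_hist * idf
    c = cand_hist * idf
    denom = np.linalg.norm(q) * np.linalg.norm(c)
    return float(q @ c / denom) if denom > 0 else 0.0

def score_face(query_patches, cand_patches, grid_shape, component_weights, idf):
    """Score a candidate face against a query face.

    Each query patch is compared with the spatially corresponding candidate
    patch and its adjacent (8-connected) neighbours, keeping the best match,
    and the per-patch score is weighted by the facial component (e.g. eye,
    nose, mouth) that the patch belongs to. The grid layout and weights are
    hypothetical placeholders.
    """
    rows, cols = grid_shape
    total = 0.0
    for r in range(rows):
        for c in range(cols):
            q_hist = query_patches[r * cols + c]
            best = 0.0
            for dr in (-1, 0, 1):                  # search the adjacent patches
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        s = patch_similarity(q_hist, cand_patches[rr * cols + cc], idf)
                        best = max(best, s)
            total += component_weights[r * cols + c] * best
    return total
```

Under these assumptions, candidate faces would be ranked by this score for each query, and retrieval quality summarized by mean average precision (mAP), as reported in the paper.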