Enhancing Image Representations for Occluded Face Recognition via Reference Conditioned Low-Rank Projection

Shibashish Sen, Manikandan Ravikiran
DOI: 10.1109/AIPR47015.2019.9174567
Published in: 2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)
Publication date: 2019-10-01
Citations: 0

Abstract

Deep learning for face recognition has been widely explored in recent times due to its ability to produce state-of-the-art results and the availability of large public datasets. While recent deep learning approaches involving margin-loss-based image representations reach 99% accuracy across benchmarks, none of these studies focus explicitly on occluded face verification. Further, in real-world scenarios, there is a need for efficient methods that handle faces occluded by hats, scarves, goggles, or sometimes exaggerated facial expressions. Moreover, with face verification gaining traction in mainstream real-time embedded surveillance applications, the proposed approaches need to be highly accurate. In this paper, we revisit this problem through a large-scale study involving multiple synthetically created goggle-occluded face datasets and multiple state-of-the-art face representations. Through this study, we identify that occlusion in faces results in non-isotropic face representations in feature space, which in turn causes a drop in performance. Therefore, we propose an approach to enhance existing face representations by learning reference conditioned Low-Rank projections (RCLP), which can create isotropic representations and thereby improve face recognition. We benchmark the developed approach over synthetically goggled versions of the LFW, CFP-FP, ATT, FEI, Georgia Tech, and Essex University face databases with representations from ResNet-ArcFace, VGGFace, MobilefaceNet-ArcFace, and LightCNN, resulting in a total of 100+ experiments in which we achieve accuracy improvements across all of them, with a maximum gain of 4% on the FEI dataset. Finally, to validate the approach in a realistic scenario, we additionally present results over our internal face verification dataset of 1k images and confirm that the proposed approach yields only positive results without degrading existing baseline performance.
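The abstract does not give the RCLP formulation itself, so the following is only a hedged illustration of the general idea it describes: learning a low-rank correction that maps occluded embeddings toward their unoccluded reference embeddings. All data here is synthetic and every variable name is hypothetical; the actual paper's method, datasets, and training procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, r_true, r_fit = 64, 400, 5, 8  # embedding dim, samples, ranks

# Toy stand-ins: reference (unoccluded) embeddings, and occluded
# counterparts produced by a low-rank linear distortion plus noise.
X_ref = rng.normal(size=(n, d))
U0 = rng.normal(size=(d, r_true)) / np.sqrt(d)
V0 = rng.normal(size=(d, r_true))
X_occ = X_ref @ (np.eye(d) + U0 @ V0.T) + 0.02 * rng.normal(size=(n, d))

# Fit a full correction W minimizing ||X_occ @ W - X_ref||_F by least
# squares, then keep only the top-r_fit part of the residual (W - I),
# giving a projection whose deviation from identity is low rank.
W, *_ = np.linalg.lstsq(X_occ, X_ref, rcond=None)
U, s, Vt = np.linalg.svd(W - np.eye(d))
P = np.eye(d) + (U[:, :r_fit] * s[:r_fit]) @ Vt[:r_fit]

def mean_cosine(A, B):
    """Mean cosine similarity between paired rows of A and B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.mean(np.sum(A * B, axis=1)))

before = mean_cosine(X_occ, X_ref)
after = mean_cosine(X_occ @ P, X_ref)
print(f"mean cosine to reference: before={before:.3f} after={after:.3f}")
```

In this synthetic setup the true distortion has a low-rank inverse correction (by the Woodbury identity), so truncating the learned residual to rank `r_fit` loses little, and the projected occluded embeddings align much more closely with their references. This is only a sketch of the low-rank-projection idea, not the paper's reference-conditioning scheme.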