Profile to frontal face recognition in the wild using coupled conditional generative adversarial network

Impact Factor: 1.8 | JCR Q3, Computer Science, Artificial Intelligence (CAS Tier 4, Computer Science)
IET Biometrics | Publication date: 2022-03-10 | DOI: 10.1049/bme2.12069
Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew C. Valenti, Nasser M. Nasrabadi
{"title":"利用耦合条件生成对抗网络对野外侧面人脸进行识别","authors":"Fariborz Taherkhani,&nbsp;Veeru Talreja,&nbsp;Jeremy Dawson,&nbsp;Matthew C. Valenti,&nbsp;Nasser M. Nasrabadi","doi":"10.1049/bme2.12069","DOIUrl":null,"url":null,"abstract":"<p>In recent years, with the advent of deep-learning, face recognition (FR) has achieved exceptional success. However, many of these deep FR models perform much better in handling frontal faces compared to profile faces. The major reason for poor performance in handling of profile faces is that it is inherently difficult to learn pose-invariant deep representations that are useful for profile FR. In this paper, the authors hypothesise that the profile face domain possesses a latent connection with the frontal face domain in a latent feature subspace. The authors look to exploit this latent connection by projecting the profile faces and frontal faces into a common latent subspace and perform verification or retrieval in the latent domain. A coupled conditional generative adversarial network (cpGAN) structure is leveraged to find the hidden relationship between the profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other dedicated to the profile domain. Each sub-network tends to find a projection that maximises the pair-wise correlation between the two feature domains in a common embedding feature subspace. The efficacy of the authors’ approach compared with the state of the art is demonstrated using the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. Additionally, the authors have also implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile to frontal FR. The authors have evaluated the performance of cpCNN and ADDA and compared it with the proposed cpGAN. Finally, the authors have also evaluated the authors’ cpGAN for reconstruction of frontal faces from input profile faces contained in the VGGFace2 dataset.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 3","pages":"260-276"},"PeriodicalIF":1.8000,"publicationDate":"2022-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12069","citationCount":"2","resultStr":"{\"title\":\"Profile to frontal face recognition in the wild using coupled conditional generative adversarial network\",\"authors\":\"Fariborz Taherkhani,&nbsp;Veeru Talreja,&nbsp;Jeremy Dawson,&nbsp;Matthew C. Valenti,&nbsp;Nasser M. Nasrabadi\",\"doi\":\"10.1049/bme2.12069\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In recent years, with the advent of deep-learning, face recognition (FR) has achieved exceptional success. However, many of these deep FR models perform much better in handling frontal faces compared to profile faces. The major reason for poor performance in handling of profile faces is that it is inherently difficult to learn pose-invariant deep representations that are useful for profile FR. In this paper, the authors hypothesise that the profile face domain possesses a latent connection with the frontal face domain in a latent feature subspace. The authors look to exploit this latent connection by projecting the profile faces and frontal faces into a common latent subspace and perform verification or retrieval in the latent domain. 
A coupled conditional generative adversarial network (cpGAN) structure is leveraged to find the hidden relationship between the profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other dedicated to the profile domain. Each sub-network tends to find a projection that maximises the pair-wise correlation between the two feature domains in a common embedding feature subspace. The efficacy of the authors’ approach compared with the state of the art is demonstrated using the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. Additionally, the authors have also implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile to frontal FR. The authors have evaluated the performance of cpCNN and ADDA and compared it with the proposed cpGAN. Finally, the authors have also evaluated the authors’ cpGAN for reconstruction of frontal faces from input profile faces contained in the VGGFace2 dataset.</p>\",\"PeriodicalId\":48821,\"journal\":{\"name\":\"IET Biometrics\",\"volume\":\"11 3\",\"pages\":\"260-276\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2022-03-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12069\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Biometrics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/bme2.12069\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Biometrics","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/bme2.12069","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

In recent years, with the advent of deep learning, face recognition (FR) has achieved exceptional success. However, many of these deep FR models handle frontal faces much better than profile faces. The major reason for the poor performance on profile faces is that it is inherently difficult to learn pose-invariant deep representations that are useful for profile FR. In this paper, the authors hypothesise that the profile face domain possesses a latent connection with the frontal face domain in a latent feature subspace. The authors exploit this latent connection by projecting the profile and frontal faces into a common latent subspace and performing verification or retrieval in that latent domain. A coupled conditional generative adversarial network (cpGAN) structure is leveraged to find the hidden relationship between the profile and frontal images in a latent common embedding subspace. Specifically, the cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other to the profile domain. Each sub-network finds a projection that maximises the pair-wise correlation between the two feature domains in a common embedding feature subspace. The efficacy of the approach compared with the state of the art is demonstrated using the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. Additionally, the authors have also implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile-to-frontal FR, evaluated their performance, and compared them with the proposed cpGAN. Finally, the authors have evaluated the cpGAN for reconstruction of frontal faces from input profile faces contained in the VGGFace2 dataset.
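The page carries no implementation details beyond the abstract. Purely as an illustration of the coupled-embedding idea described there (not the authors' actual cpGAN, which additionally trains per-domain conditional GAN generators and discriminators), the following is a minimal PyTorch sketch: two domain-specific encoders project profile and frontal faces into a common latent subspace, and a coupling term pulls genuine profile/frontal pairs together in that subspace. All names and layer sizes here (DomainEncoder, coupling_loss, the toy architecture) are hypothetical.

```python
# Minimal, illustrative sketch of a coupled embedding in the spirit of the
# cpGAN described above. Not the authors' code; names and sizes are invented.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainEncoder(nn.Module):
    """Toy convolutional encoder mapping a face image to a latent embedding."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return F.normalize(self.proj(h), dim=1)  # unit-norm embedding


def coupling_loss(z_profile: torch.Tensor, z_frontal: torch.Tensor) -> torch.Tensor:
    """Surrogate for maximising pair-wise correlation: genuine profile/frontal
    pairs should lie close together in the common embedding subspace."""
    return (1.0 - F.cosine_similarity(z_profile, z_frontal, dim=1)).mean()


if __name__ == "__main__":
    profile_net, frontal_net = DomainEncoder(), DomainEncoder()
    profile_imgs = torch.randn(8, 3, 64, 64)   # dummy profile-face batch
    frontal_imgs = torch.randn(8, 3, 64, 64)   # matching frontal-face batch
    loss = coupling_loss(profile_net(profile_imgs), frontal_net(frontal_imgs))
    loss.backward()
    print(f"coupling loss: {loss.item():.4f}")
```

At test time, verification or retrieval in the latent domain would then reduce to comparing the two embeddings, for example by cosine similarity between a probe profile embedding and gallery frontal embeddings.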

Source journal
IET Biometrics (Computer Science, Artificial Intelligence)
CiteScore: 5.90
Self-citation rate: 0.00%
Articles published per year: 46
Average review time: 33 weeks
Journal description: The field of biometric recognition - automated recognition of individuals based on their behavioural and biological characteristics - has now reached a level of maturity where viable practical applications are both possible and increasingly available. The biometrics field is characterised especially by its interdisciplinarity since, while focused primarily around a strong technological base, effective system design and implementation often requires a broad range of skills encompassing, for example, human factors, data security and database technologies, psychological and physiological awareness, and so on. Also, the technology focus itself embraces diversity, since the engineering of effective biometric systems requires integration of image analysis, pattern recognition, sensor technology, database engineering, security design and many other strands of understanding. The scope of the journal is intentionally relatively wide. While focusing on core technological issues, it is recognised that these may be inherently diverse and in many cases may cross traditional disciplinary boundaries. The scope of the journal will therefore include any topics where it can be shown that a paper can increase our understanding of biometric systems, signal future developments and applications for biometrics, or promote greater practical uptake for relevant technologies:
- Development and enhancement of individual biometric modalities including the established and traditional modalities (e.g. face, fingerprint, iris, signature and handwriting recognition) and also newer or emerging modalities (gait, ear-shape, neurological patterns, etc.)
- Multibiometrics, theoretical and practical issues, implementation of practical systems, multiclassifier and multimodal approaches
- Soft biometrics and information fusion for identification, verification and trait prediction
- Human factors and the human-computer interface issues for biometric systems, exception handling strategies
- Template construction and template management, ageing factors and their impact on biometric systems
- Usability and user-oriented design, psychological and physiological principles and system integration
- Sensors and sensor technologies for biometric processing
- Database technologies to support biometric systems
- Implementation of biometric systems, security engineering implications, smartcard and associated technologies in implementation, implementation platforms, system design and performance evaluation
- Trust and privacy issues, security of biometric systems and supporting technological solutions, biometric template protection
- Biometric cryptosystems, security and biometrics-linked encryption
- Links with forensic processing and cross-disciplinary commonalities
- Core underpinning technologies (e.g. image analysis, pattern recognition, computer vision, signal processing, etc.), where the specific relevance to biometric processing can be demonstrated
- Applications and application-led considerations
- Position papers on technology or on the industrial context of biometric system development
- Adoption and promotion of standards in biometrics, improving technology acceptance, deployment and interoperability, avoiding cross-cultural and cross-sector restrictions
- Relevant ethical and social issues