DSU-GAN: A robust frontal face recognition approach based on generative adversarial network

IF 4.3 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence)
Deyu Lin, Huanxin Wang, Xin Lei, Weidong Min, Chenguang Yao, Yuan Zhong, Yong Liang Guan
{"title":"DSU-GAN: A robust frontal face recognition approach based on generative adversarial network","authors":"Deyu Lin ,&nbsp;Huanxin Wang ,&nbsp;Xin Lei ,&nbsp;Weidong Min ,&nbsp;Chenguang Yao ,&nbsp;Yuan Zhong ,&nbsp;Yong Liang Guan","doi":"10.1016/j.cviu.2024.104128","DOIUrl":null,"url":null,"abstract":"<div><div>Face recognition technology is widely used in different areas, such as entrance guard, payment <em>etc</em>. However, little attention has been given to non-positive faces recognition, especially model training and the quality of the generated images. To this end, a novel robust frontal face recognition approach based on generative adversarial network (DSU-GAN) is proposed in this paper. A mechanism of consistency loss is presented in deformable convolution proposed in the generator-encoder to avoid additional computational overhead and the problem of overfitting. In addition, a self-attention mechanism is presented in generator–encoder to avoid information overloading and construct the long-term dependencies at the pixel level. To balance the capability between the generator and discriminator, a novelf discriminator architecture based U-Net is proposed. Finally, the single-way discriminator is improved through a new up-sampling module. Experiment results demonstrate that our proposal achieves an average Rank-1 recognition rate of 95.14% on the Multi-PIE face dataset in dealing with the multi-pose. In addition, it is proven that our proposal has achieved outstanding performance in recent benchmarks conducted on both IJB-A and IJB-C.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"249 ","pages":"Article 104128"},"PeriodicalIF":4.3000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314224002091","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Face recognition technology is widely used in areas such as access control and payment. However, little attention has been paid to recognizing non-frontal faces, in particular to model training and the quality of the generated images. To this end, a novel robust frontal face recognition approach based on a generative adversarial network (DSU-GAN) is proposed in this paper. Deformable convolution is introduced into the generator-encoder to make the generator more robust when learning faces with varying poses, and a consistency-loss mechanism is applied to the deformable convolution to avoid additional computational overhead and overfitting. In addition, a self-attention mechanism is added to the generator-encoder to avoid information overload and to build long-range dependencies between arbitrary positions of the feature map at the pixel level. To balance the capabilities of the generator and the discriminator, a novel discriminator architecture based on U-Net is proposed. Finally, the single-way discriminator is improved with a new up-sampling module. Experimental results demonstrate that our proposal achieves an average Rank-1 recognition rate of 95.14% on the Multi-PIE face dataset under multi-pose conditions. In addition, our proposal achieves outstanding performance on the recent IJB-A and IJB-C benchmarks.
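The abstract describes the generator-encoder only at a high level. As a rough illustration of its two named ingredients, deformable convolution and pixel-level self-attention, the following PyTorch-style sketch shows one plausible way to wire them together; the module names, channel sizes, and the SAGAN-style attention formulation are assumptions made here for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the DSU-GAN code): a generator-encoder block
# combining deformable convolution with SAGAN-style pixel-level self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class DeformEncoderBlock(nn.Module):
    """A plain conv predicts per-position sampling offsets, then a deformable
    conv aggregates features at those offset locations (helps with pose shifts)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # 2 offsets (dy, dx) for each of the k*k kernel positions
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.norm = nn.InstanceNorm2d(out_ch)

    def forward(self, x):
        offset = self.offset_conv(x)
        return F.relu(self.norm(self.deform_conv(x, offset)))


class PixelSelfAttention(nn.Module):
    """SAGAN-style self-attention: every pixel attends to every other pixel,
    building long-range dependencies across the whole feature map."""
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.key = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.value = nn.Conv2d(ch, ch, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # (B, HW, C//8)
        k = self.key(x).flatten(2)                           # (B, C//8, HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # (B, HW, HW)
        v = self.value(x).flatten(2)                          # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                           # residual connection


# Minimal usage with assumed input size and channel widths
x = torch.randn(1, 3, 128, 128)                               # a batch of face images
features = PixelSelfAttention(64)(DeformEncoderBlock(3, 64)(x))  # (1, 64, 128, 128)
```

Note that the consistency loss the abstract applies to the deformable convolution is omitted here; the sketch only shows how the offsets and attention could be wired into an encoder stage.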
Source Journal
Computer Vision and Image Understanding
Category: Engineering Technology - Engineering: Electronic and Electrical
CiteScore: 7.80
Self-citation rate: 4.40%
Articles published: 112
Review time: 79 days
Journal description: The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis, from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.

Research areas include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems