Face Recognition from Sequential Sparse 3D Data via Deep Registration

Yang Tan, Hongxin Lin, Zelin Xiao, Shengyong Ding, Hongyang Chao
{"title":"Face Recognition from Sequential Sparse 3D Data via Deep Registration","authors":"Yang Tan, Hongxin Lin, Zelin Xiao, Shengyong Ding, Hongyang Chao","doi":"10.1109/ICB45273.2019.8987284","DOIUrl":null,"url":null,"abstract":"Previous works have shown that face recognition with high accurate 3D data is more reliable and insensitive to pose and illumination variations. Recently, low-cost and portable 3D acquisition techniques like ToF(Time of Flight) and DoE based structured light systems enable us to access 3D data easily, e.g., via a mobile phone. However, such devices only provide sparse(limited speckles in structured light system) and noisy 3D data which can not support face recognition directly. In this paper, we aim at achieving high-performance face recognition for devices equipped with such modules which is very meaningful in practice as such devices will be very popular. We propose a framework to perform face recognition by fusing a sequence of low-quality 3D data. As 3D data are sparse and noisy which can not be well handled by conventional methods like the ICP algorithm, we design a PointNet-like Deep Registration Network(DRNet) which works with ordered 3D point coordinates while preserving the ability of mining local structures via convolution. Meanwhile we develop a novel loss function to optimize our DRNet based on the quaternion expression which obviously outperforms other widely used functions. For face recognition, we design a deep convolutional network which takes the fused 3D depth-map as input based on AMSoftmax model. Experiments show that our DRNet can achieve rotation error 0.95° and translation error 0.28mm for registration. 
The face recognition on fused data also achieves rank-1 accuracy 99.2%, FAR-0.001 97.5% on Bosphorus dataset which is comparable with state-of-the-art high-quality data based recognition performance.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Biometrics (ICB)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICB45273.2019.8987284","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Previous works have shown that face recognition with highly accurate 3D data is more reliable and insensitive to pose and illumination variations. Recently, low-cost and portable 3D acquisition techniques such as ToF (Time of Flight) and DoE-based structured-light systems have made 3D data easy to obtain, e.g., via a mobile phone. However, such devices provide only sparse (limited speckles in a structured-light system) and noisy 3D data, which cannot support face recognition directly. In this paper, we aim at high-performance face recognition for devices equipped with such modules, which is of great practical value as these devices become widespread. We propose a framework that performs face recognition by fusing a sequence of low-quality 3D data. Because the 3D data are sparse and noisy and cannot be handled well by conventional methods such as the ICP algorithm, we design a PointNet-like Deep Registration Network (DRNet) that operates on ordered 3D point coordinates while preserving the ability to mine local structures via convolution. We also develop a novel loss function, based on the quaternion expression of rotation, to optimize our DRNet; it clearly outperforms other widely used loss functions. For face recognition, we design a deep convolutional network based on the AM-Softmax model that takes the fused 3D depth map as input. Experiments show that our DRNet achieves a rotation error of 0.95° and a translation error of 0.28 mm for registration. Face recognition on the fused data achieves 99.2% rank-1 accuracy and a 97.5% verification rate at FAR = 0.001 on the Bosphorus dataset, comparable with state-of-the-art recognition performance based on high-quality data.
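The abstract says the registration loss is built on the quaternion expression of rotation but does not give its exact form. As a minimal sketch, one common quaternion rotation distance that could serve as such a loss is shown below; the function name and the specific 1 − |⟨q₁, q₂⟩| form are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def quaternion_rotation_loss(q_pred, q_true):
    """Distance between two unit quaternions, invariant to the
    q / -q sign ambiguity (q and -q encode the same 3D rotation).
    Returns 0 when the predicted and ground-truth rotations coincide."""
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_true = q_true / np.linalg.norm(q_true)
    dot = np.abs(np.dot(q_pred, q_true))  # |<q1, q2>| in [0, 1]
    return 1.0 - dot
```

Unlike a naive L2 loss on quaternion components, this form does not penalize a prediction of -q when the target is q, which is essential because both represent the same rotation.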
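The recognition network is based on the AM-Softmax model. A minimal sketch of the AM-Softmax logit computation is given below; the scale s = 30 and margin m = 0.35 are common defaults from the original AM-Softmax paper, not values reported for this work:

```python
import numpy as np

def am_softmax_logits(features, weights, labels, s=30.0, m=0.35):
    """AM-Softmax logits: cosine similarities between L2-normalized
    features (N, D) and class weights (D, C), with an additive margin m
    subtracted from each sample's target-class cosine, scaled by s."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                               # (N, C) cosine similarities
    cos[np.arange(len(labels)), labels] -= m  # margin on the target class
    return s * cos
```

These logits would then be fed to a standard cross-entropy loss; the margin forces the target-class cosine to exceed the others by at least m, which tightens intra-class clusters in the embedding space.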