2020 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Distinctive Feature Representation for Contactless 3D Hand Biometrics using Surface Normal Directions
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304860
Kevin H. M. Cheng, Ajay Kumar
Abstract: Contactless 3D hand biometrics offers hygienic and convenient approaches for biometric recognition. This paper investigates a distinctive feature representation using 3D surface normal information for more accurate 3D hand biometric identification. Prior research on contactless 3D hand biometric identification largely incorporates 3D depth and surface curvature information to recover discriminative features. Our investigation presented in this paper indicates that extracting distinctive features from surface normal information, which can also be directly obtained from low-cost photometric stereo based imaging systems, can offer a computationally simpler alternative and is therefore highly desirable. The directions of neighbouring surface normal vectors can encode frequently observed irregular ridge and valley regions, which can enable more accurate surface feature description. Comparative experimental results presented in this paper validate the effectiveness of the proposed approach.
Citations: 2
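The core idea of the Cheng and Kumar paper, encoding the directions of neighbouring surface normals, can be sketched in a few lines of numpy. This is a hypothetical illustration only: the paper obtains normals from photometric stereo imaging, whereas here they are approximated from a depth map by finite differences, and the neighbouring-direction code is reduced to a cosine similarity between adjacent normals (the function names are invented).

```python
import numpy as np

def surface_normals(depth):
    """Estimate per-pixel unit surface normals from a depth map via
    finite-difference gradients (a common, simple approximation)."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    # The (unnormalised) normal of the surface z = f(x, y) is (-dz/dx, -dz/dy, 1).
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

def neighbour_direction_code(normals):
    """Encode each pixel by the cosine similarity between its normal and
    the normal of its right-hand neighbour; where the surface bends
    sharply (ridges, valleys), the similarity drops."""
    return np.sum(normals[:, :-1] * normals[:, 1:], axis=2)

# Synthetic parabolic "valley" as a stand-in for a 3D hand surface patch.
depth = np.fromfunction(lambda y, x: 0.01 * (x - 8) ** 2, (16, 16))
n = surface_normals(depth)
code = neighbour_direction_code(n)
```

A perfectly flat surface yields a similarity map of ones; ridge and valley regions, where adjacent normals diverge, score lower.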
Leveraging Auxiliary Tasks for Height and Weight Estimation by Multi Task Learning
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304855
Dan Han, Jie Zhang, S. Shan
Abstract: Height and weight, two of the most important biological characteristics of the human body, play crucial roles in physical condition estimation. Height and weight estimation from a single face image via deep convolutional neural networks suffers from poor performance due to a lack of labeled data. To address this issue, inspired by the relevance of gender, age, height and weight, we propose an auxiliary-task learning framework that employs multiple relevant tasks to improve the performance of the primary tasks. Specifically, gender prediction and age estimation are utilized as auxiliary tasks to assist learning of the primary tasks (i.e., height and weight estimation) via a deep residual auxiliary block. Experiments are conducted on the public VIP-attributes dataset and our private VIPL-MumoFace-WH dataset. Our method outperforms baseline hard-parameter-sharing methods in multi-task learning, demonstrating the effectiveness of the auxiliary-task learning framework for height and weight estimation.
Citations: 5
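The auxiliary-task idea in Han et al. boils down to optimising a weighted sum of primary and auxiliary losses over a shared backbone. A minimal sketch of such a combined loss, assuming (purely for illustration, not the paper's actual objective) that both the primary height/weight targets and an auxiliary age target are regressed with mean squared error:

```python
import numpy as np

def multi_task_loss(primary_pred, primary_true, aux_pred, aux_true, aux_weight=0.3):
    """Weighted sum of a primary loss (e.g. height/weight regression) and
    an auxiliary loss (e.g. age estimation), so that gradients from the
    auxiliary task also shape the shared representation."""
    primary = np.mean((primary_pred - primary_true) ** 2)
    auxiliary = np.mean((aux_pred - aux_true) ** 2)
    return primary + aux_weight * auxiliary

# Perfect primary predictions: only the (down-weighted) auxiliary error remains.
loss = multi_task_loss(np.array([172.0, 68.0]), np.array([172.0, 68.0]),
                       np.array([30.0]), np.array([25.0]), aux_weight=0.3)
```

The `aux_weight` hyperparameter (0.3 here is an arbitrary choice) trades off how strongly the auxiliary supervision influences the shared features.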
3DPC-Net: 3D Point Cloud Network for Face Anti-spoofing
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304873
Xuan Li, Jun Wan, Yi Jin, Ajian Liu, G. Guo, Stan Z. Li
Abstract: Face anti-spoofing plays a vital role in face recognition systems. Most deep learning-based methods directly use 2D images assisted with temporal information (i.e., motion, rPPG) or pseudo-3D information (i.e., depth). The main drawback of these methods is that an extra network is needed to generate the depth/rPPG information to assist the backbone network for face anti-spoofing. Different from these methods, we propose a novel method named 3D Point Cloud Network (3DPC-Net). It is an encoder-decoder network that predicts 3DPC maps to discriminate live faces from spoofing ones. The main traits of the proposed method are that: 1) it is the first time that 3DPC has been used for face anti-spoofing; 2) 3DPC-Net is simple and effective, relying only on 3DPC supervision. Extensive experiments on four databases (i.e., Oulu-NPU, SiW, CASIA-FASD, Replay Attack) demonstrate that 3DPC-Net is competitive with state-of-the-art methods.
Citations: 17
Unconstrained Face Identification using Ensembles trained on Clustered Data
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304882
R. H. Vareto, W. R. Schwartz
Abstract: Open-set face recognition describes a scenario where unknown subjects, unseen during the training stage, appear at test time. Not only does it require methods that accurately identify individuals of interest, but it also demands approaches that effectively deal with unfamiliar faces. This work details a scalable open-set face identification approach for galleries composed of hundreds to thousands of subjects. It combines clustering with an ensemble of binary learning algorithms that estimates when query face samples belong to the face gallery and then retrieves their correct identity. The approach selects the most suitable gallery subjects and uses the ensemble to improve prediction performance. We carry out experiments on the well-known LFW and YTF benchmarks. Results show that competitive performance can be achieved even when targeting scalability.
Citations: 2
SSBC 2020: Sclera Segmentation Benchmarking Competition in the Mobile Environment
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304881
M. Vitek, A. Das, Y. Pourcenoux, A. Missler, C. Paumier, S. Das, I. De Ghosh, D. Lucio, L. A. Zanlorensi, D. Menotti, F. Boutros, N. Damer, J. H. Grebe, A. Kuijper, J. Hu, Y. He, C. Wang, H. Liu, Y. Wang, Z. Sun, D. Osorio-Roig, C. Rathgeb, C. Busch, J. Tapia, A. Valenzuela, G. Zampoukis, Lazaros Tsochatzidis, I. Pratikakis, S. Nathan, R. Suganya, V. Mehta, A. Dhall, K. Raja, G. Gupta, J. Khiarak, M. Akbari-Shahper, F. Jaryani, M. Asgari-Chenaghlu, R. Vyas, S. Dakshit, P. Peer, U. Pal, V. Štruc
Abstract: The paper presents a summary of the 2020 Sclera Segmentation Benchmarking Competition (SSBC), the 7th in the series of group benchmarking efforts centred around the problem of sclera segmentation. Different from previous editions, the goal of SSBC 2020 was to evaluate the performance of sclera-segmentation models on images captured with mobile devices. The competition was used as a platform to assess the sensitivity of existing models to i) differences in the mobile devices used for image capture and ii) changes in the ambient acquisition conditions. 26 research groups registered for SSBC 2020, out of which 13 took part in the final round and submitted a total of 16 segmentation models for scoring. These included a wide variety of deep-learning solutions as well as one approach based on standard image processing techniques. Experiments were conducted with three recent datasets. Most of the segmentation models achieved relatively consistent performance across images captured with different mobile devices (with slight differences across devices), but struggled most with low-quality images captured in challenging ambient conditions, i.e., in an indoor environment and with poor lighting.
Citations: 15
Pixel Sampling for Style Preserving Face Pose Editing
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304867
Xiangnan Yin, Di Huang, Hongyu Yang, Zehua Fu, Yunhong Wang, Liming Chen
Abstract: Existing auto-encoder based face pose editing methods primarily focus on modeling the identity-preserving ability during pose synthesis, but are less able to preserve the image style properly, which refers to the color, brightness, saturation, etc. In this paper, we take advantage of the well-known frontal/profile optical illusion and present a novel two-stage approach to solve the aforementioned dilemma, in which the task of face pose manipulation is cast as face inpainting. By selectively sampling pixels from the input face and slightly adjusting their relative locations with the proposed “Pixel Attention Sampling” module, the face editing result faithfully keeps the identity information as well as the image style unchanged. By leveraging high-dimensional embedding at the inpainting stage, finer details are generated. Further, with 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing than merely controlling the yaw angle as usually achieved by the current state of the art. Both the qualitative and quantitative evaluations validate the superiority of the proposed approach.
Citations: 1
An Assessment of GANs for Identity-related Applications
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304879
Richard T. Marriott, Safa Madiouni, S. Romdhani, S. Gentric, Liming Chen
Abstract: Generative Adversarial Networks (GANs) are now capable of producing synthetic face images of exceptionally high visual quality. In parallel to the development of GANs themselves, efforts have been made to develop metrics to objectively assess the characteristics of the synthetic images, mainly focusing on visual quality and the variety of images. Little work has been done, however, to assess overfitting of GANs and their ability to generate new identities. In this paper we apply a state-of-the-art biometric network to various datasets of synthetic images and perform a thorough assessment of their identity-related characteristics. We conclude that GANs can indeed be used to generate new, imagined identities, meaning that applications such as anonymisation of image sets and augmentation of training datasets with distractor images are viable. We also assess the ability of GANs to disentangle identity from other image characteristics and propose a novel GAN triplet loss that we show improves this disentanglement.
Citations: 10
A Metric Learning Approach to Eye Movement Biometrics
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304859
D. Lohr, Henry K. Griffith, Samantha Aziz, Oleg V. Komogortsev
Abstract: Metric learning is a valuable technique for enabling the ongoing enrollment of new users within biometric systems. While this approach has been heavily employed for other biometric modalities such as facial recognition, applications to eye movements have only recently been explored. This manuscript further investigates the application of metric learning to eye movement biometrics. A set of three multilayer perceptron networks is trained to embed feature vectors describing three classes of eye movements: fixations, saccades, and post-saccadic oscillations. The network is validated on a dataset containing eye movement traces of 269 subjects recorded during a reading task. The proposed algorithm is benchmarked against a previously introduced statistical biometric approach. While the mean equal error rate (EER) increased versus the benchmark method, the proposed technique demonstrated lower dispersion in EER across the four test folds considered herein.
Citations: 11
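Two ingredients of the Lohr et al. approach, a distance-based embedding objective and the equal error rate used for evaluation, can be sketched generically. This is not the authors' implementation: the loss below is the standard triplet margin loss used widely in metric learning, and the EER helper is a simple threshold sweep over match distances.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on embedding vectors: the anchor
    should be closer to the positive (same subject) than to the
    negative (different subject) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def equal_error_rate(genuine, impostor):
    """Approximate EER for a distance-based matcher: sweep thresholds
    and return the smallest worst-case of false-accept / false-reject."""
    best = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor <= t)  # impostor distances wrongly accepted
        frr = np.mean(genuine > t)    # genuine distances wrongly rejected
        best = min(best, max(far, frr))
    return best
```

A well-separated system (all genuine distances below all impostor distances) reaches an EER of zero; overlap between the two score distributions pushes the EER up.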
Is Warping-based Cancellable Biometrics (still) Sensible for Face Recognition?
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304870
Simon Kirchgasser, A. Uhl, Yoanna Martínez-Díaz, Heydi Mendez Vazquez
Abstract: We conduct an ISO/IEC 24745 and 30136 compliant assessment of block-based warping sample transformation techniques aimed at template protection. Particular focus is laid on evaluating the results in light of the evolution of face recognition technology, ranging from more “historic” hand-crafted features to state-of-the-art deep-learning (DL) based schemes. It turns out that the high robustness of today's face recognition technology can handle the geometrical distortions introduced by warping as just another form of variability, like pose, illumination, and expression variations, thereby disabling the intended protection functionality of warping. Therefore, block-based warping sample transformation must not be used as a template protection technique with today's state-of-the-art face recognition schemes, while some settings could be identified that provide template protection to some extent for less recent face recognition technology.
Citations: 8
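Block-based warping, the transformation under scrutiny in Kirchgasser et al., applies a key-dependent geometric distortion to the sample before matching. A toy sketch, assuming (purely for illustration) that the warp is a per-block circular pixel shift driven by a seeded random generator; the transformations assessed in the paper are more elaborate:

```python
import numpy as np

def block_warp(img, block=8, key=0):
    """Toy cancellable transform: shift each block of the image by a
    small pseudo-random offset derived from `key`. The same key always
    reproduces the same distortion; changing the key revokes the template."""
    rng = np.random.default_rng(key)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            dy, dx = rng.integers(-2, 3, size=2)  # offsets in [-2, 2]
            patch = img[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = np.roll(patch, (dy, dx), axis=(0, 1))
    return out

# Synthetic stand-in for a face image.
face = np.arange(64 * 64, dtype=float).reshape(64, 64)
warped = block_warp(face, key=42)
```

The paper's finding is that modern deep face matchers largely see through this kind of mild geometric distortion, treating it like a pose or expression variation, which is what defeats the protection.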
Cross Modal Person Re-identification with Visual-Textual Queries
2020 IEEE International Joint Conference on Biometrics (IJCB) | Pub Date: 2020-09-28 | DOI: 10.1109/IJCB48548.2020.9304940
Ammarah Farooq, Muhammad Awais, J. Kittler, A. Akbari, S. S. Khalid
Abstract: Classical person re-identification approaches assume that a person of interest has appeared across different cameras and can be queried by one of the existing images. However, in real-world surveillance scenarios, frequently no visual information will be available about the queried person. In such scenarios, a natural language description of the person by a witness will provide the only source of information for retrieval. In this work, person re-identification using both vision and language information is addressed under all possible gallery and query scenarios. A two-stream deep convolutional neural network framework supervised by an identity-based cross-entropy loss is presented. Canonical Correlation Analysis is performed to enhance the correlation between the two modalities in a joint latent embedding space. To investigate the benefits of the proposed approach, a new testing protocol under a multi-modal ReID setting is proposed for the test split of the CUHK-PEDES and CUHK-SYSU benchmarks. The experimental results verify that the learnt visual representations are more robust and perform 20% better during retrieval compared to a single-modality system.
Citations: 5
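The Canonical Correlation Analysis step used by Farooq et al. to align the two modalities can be sketched with a numpy-only CCA (whiten each view via SVD, then take an SVD of the cross-product of the whitened views). The data below are synthetic stand-ins for visual and textual embeddings sharing a common "identity" factor; all dimensions and names are invented for illustration.

```python
import numpy as np

def cca(X, Y, k):
    """Canonical Correlation Analysis: project two feature sets (e.g.
    visual and textual embeddings) into a joint space where the i-th
    components of the two projections are maximally correlated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)  # whitening bases
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)                  # canonical correlations in S
    A = Vxt.T @ np.diag(1.0 / Sx) @ U[:, :k]             # projection for X
    B = Vyt.T @ np.diag(1.0 / Sy) @ Vt.T[:, :k]          # projection for Y
    return X @ A, Y @ B, S[:k]

rng = np.random.default_rng(0)
identity_factor = rng.normal(size=(300, 4))              # shared latent "identity"
visual = identity_factor @ rng.normal(size=(4, 64)) + 0.05 * rng.normal(size=(300, 64))
textual = identity_factor @ rng.normal(size=(4, 32)) + 0.05 * rng.normal(size=(300, 32))
v_proj, t_proj, corr = cca(visual, textual, k=4)
```

After fitting, corresponding components of the two projected views are highly correlated, which is the property that lets a textual query be matched against a visual gallery in the joint space.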