{"title":"HID 2021: Competition on Human Identification at a Distance 2021","authors":"Shiqi Yu, Yongzhen Huang, Liang Wang, Yasushi Makihara, Edel B. García Reyes, Feng Zheng, Md Atiqur Rahman Ahad, Beibei Lin, Yuchao Yang, Haijun Xiong, Bin Huang, Yuxuan Zhang","doi":"10.1109/IJCB52358.2021.9484377","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484377","url":null,"abstract":"The Competition on Human Identification at a Distance 2021 (HID 2021) is to promote the research in human identification at a distance and to provide a benchmark to evaluate different methods. HID 2021 is the second follow-up from the first one, HID 2020. The dataset size and the evaluation protocal are the same with the previous competition, but the data in the test set has been changed. The paper firstly introduces the dataset and the evaluation protocol, then describes the methods from the top teams and their results. The methods show how to achieve state-of-the-art performance on gait recognition. The results in HID 2021 are better than those in HID 2020. From the comparisons and analysis, some useful conclusions can be drawn. We hope more improvements can be achieved by better followup competitions.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114979948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Orientation Field Estimation for Latent Fingerprints with Prior Knowledge of Fingerprint Pattern","authors":"Yongjie Duan, Jianjiang Feng, Jiwen Lu, Jie Zhou","doi":"10.1109/IJCB52358.2021.9484334","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484334","url":null,"abstract":"Estimating orientation field for latent fingerprints plays a crucial role in latent fingerprints recognition systems. Due to poor quality and small area of latent fingerprints, however, the performance of the state-of-the-art algorithms is still far from satisfactory. Considering the intrinsic characteristics of fingerprints that the distribution of orientation field varies with the fingerprint patterns, we propose an orientation field estimation algorithm for latent fingerprints based on residual learning using prior knowledge of fingerprint patterns. Specifically, statistical distribution models of orientation field, for different fingerprint patterns, are calculated based on a large database consisting of 14,000 fingerprints with good quality using clustering method. The residual orientation fields and reliability scores, indicating the consistency with different statistical orientation models, are estimated using a deep network, named RefNet. Then the final orientation field is obtained by fusing the estimations according to their corresponding reliability scores. Experimental results on the widely used latent database NIST SD27 demonstrate that the proposed algorithm provides higher orientation field estimation accuracy compared with the state-of-the-art methods, and by enhancing latent fingerprints using estimated orientation field, the identification performance is further improved.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128433967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Face Point Cloud Super-Resolution Network","authors":"Jiaxin Li, Feiyu Zhu, X. Yang, Qijun Zhao","doi":"10.1109/IJCB52358.2021.9484379","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484379","url":null,"abstract":"With the development of consumer-level depth sensors, 3D face point cloud data can be easily captured now. However, such data are often accompanied by low resolution, noise, and holes. At the same time, high-precision 3D scanners are bulky and can not be widely used in daily applications due to costs and inconvenience. To fill the gap between low and high resolution 3D faces, we propose a two-stage framework named the face point cloud super-resolution network (FPSRN) to recover high-resolution 3D face data from the low-resolution counterparts. As the human faces can be aligned into a unified coordinate system, we formulate point cloud super-resolution as a z-coordinate prediction problem. Cascaded auto-encoders are employed to retain both global structure and boundary information of different face regions during super-resolution. Compared with state- of-the-art point cloud completion methods and depth estimation methods, our method improves the Earth-Mover’s Distance (EMD) and the Root Mean Square Error (RMSE) metrics by 43% and 25%, respectively.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130545419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Dense Pyramid Convolution Network for Infant Fingerprint Super-Resolution and Enhancement","authors":"Yelin Shi, Manhua Liu","doi":"10.1109/IJCB52358.2021.9484397","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484397","url":null,"abstract":"Fingerprint recognition has been widely investigated and achieved great success for personal recognition. Most of existing fingerprint recognition algorithms can work well on adults but cannot be directly used for children, especially for infants. Compared with adult fingerprints, the size of infant fingerprints is smaller with lower resolution under the same acquisition conditions. In addition, infant fingerprint images suffer from various degradations from the physiological effects and bad collection conditions. Some studies focused on using high-quality and high-resolution sensors to capture infant fingerprints for reliable recognition, which will increase the costs. In this paper, we propose a deep learning based method to perform the super-resolution and enhancement of infant fingerprints by an end-to-end way for more reliable recognition, which is compatible with the existing recognition system. In this method, a dense pyramid convolution neural network is built for joint deep learning of fingerprint super-resolution and enhancement, with a minutia attention block added for more accurate reconstruction of local details. The network is trained with adult fingerprints for image transformation and tested on infant fingerprint dataset. Experimental results show that the proposed method achieves promising improvements for infant fingerprint recognition.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131719783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Liveness Detection Competition (LivDet-Face) - 2021","authors":"Sandip Purnapatra, Nic Smalt, Keivan Bahmani, Priyanka Das, David Yambay, A. Mohammadi, Anjith George, T. Bourlai, S. Marcel, S. Schuckers, Meiling Fang, N. Damer, F. Boutros, Arjan Kuijper, Alperen Kantarci, Basar Demir, Zafer Yildiz, Zabi Ghafoory, Hasan Dertli, H. K. Ekenel, Son Vu, V. Christophides, Liang Dashuang, Zhang Guanghao, Hao Zhanlong, Liu Junfu, Jin Yufeng, Samo Liu, Samuel Huang, Salieri Kuei, Jag Mohan Singh, Raghavendra Ramachandra","doi":"10.1109/IJCB52358.2021.9484359","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484359","url":null,"abstract":"Liveness Detection (LivDet)-Face is an international competition series open to academia and industry. The competition’s objective is to assess and report state-of-the-art in liveness / Presentation Attack Detection (PAD) for face recognition. Impersonation and presentation of false samples to the sensors can be classified as presentation attacks and the ability for the sensors to detect such attempts is known as PAD. LivDet-Face 2021 * will be the first edition of the face liveness competition. This competition serves as an important benchmark in face presentation attack detection, offering (a) an independent assessment of the current state of the art in face PAD, and (b) a common evaluation protocol, availability of Presentation Attack Instruments (PAI) and live face image dataset through the Biometric Evaluation and Testing (BEAT) platform. The competition can be easily followed by researchers after it is closed, in a platform in which participants can compare their solutions against the LivDet-Face winners.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132700089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LivDet 2021 Fingerprint Liveness Detection Competition - Into the unknown","authors":"Roberto Casula, Marco Micheletto, G. Orrú, Rita Delussu, S. Concas, Andrea Panzino, G. Marcialis","doi":"10.1109/IJCB52358.2021.9484399","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484399","url":null,"abstract":"The International Fingerprint Liveness Detection Competition is an international biennial competition open to academia and industry with the aim to assess and report advances in Fingerprint Presentation Attack Detection. The proposed \"Liveness Detection in Action\" and \"Fingerprint representation\" challenges were aimed to evaluate the impact of a PAD embedded into a verification system, and the effectiveness and compactness of feature sets for mobile applications. Furthermore, we experimented a new spoof fabrication method that has particularly affected the final results. Twenty-three algorithms were submitted to the competition, the maximum number ever achieved by LivDet.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115098051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CAS-AIR-3D Face: A Low-Quality, Multi-Modal and Multi-Pose 3D Face Database","authors":"Qi Li, Xiaoxiao Dong, Weining Wang, C. Shan","doi":"10.1109/IJCB52358.2021.9484332","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484332","url":null,"abstract":"Benefiting from deep learning with large scale face databases, 2D face recognition has made significant progress in recent years. However, it still highly depends on lighting conditions and human poses, and suffers from face spoofing problem. In contrast, 3D face recognition reveals a new path that can overcome the previous limitations of 2D face recognition. One of the most important problems for 3D face recognition is to construct a suitable database, which can be exploited to train different 3D face recognition algorithms. In this work, we propose a new database, CAS-AIR-3D Face, for low-quality 3D face recognition. It includes 24713 videos from 3093 individuals, which is captured by Intel RealSense SR305. The database contains three modalities: color, depth and near infrared, and is rich in pose, expression, occlusion and distance variations. To the best of our konwledge, CAS-AIR-3D Face is the largest low-quality 3D face database in terms of the number of individuals and the sample variations. Moreover, we preprocess the data via a sophisticated face alignment method, and Point Cloud Spherical Cropping Method (SCM) is leveraged to remove the background noise in the depth images. Finally, an evaluation protocol is designed for fair comparison, and extensive experiments are conducted with different backbone networks to provide different baselines on this database.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128259632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Morphing of Newborns Can Be Threatening Too : Preliminary Study on Vulnerability and Detection","authors":"S. Venkatesh, Raghavendra Ramachandra, K. Raja","doi":"10.1109/IJCB52358.2021.9484367","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484367","url":null,"abstract":"Face morphing attacks are evolving as a significant threat to the Face Recognition Systems (FRS) operating in border control and passport issuance. As newborn face has very limited discriminative facial characteristics, it is challenging for both human and machines to verify the newborns based on the facial biometrics accurately. Further, the introduction of face morphing elevates the problem of baby trafficking as it can challenge both human and machine-based facial verification. In this paper, we pose a question if the morphed images of newborns can threaten FRS and present first systematic study on the vulnerability analysis of FRS towards morphed faces of newborns. To effectively benchmark threat of newborns’ facial morphing attacks, we introduce a new face morphing dataset constructed based on 42 unique newborns with 852 bona fide and 2451 morphing images. Extensive experiments are carried out on the newly constructed dataset to benchmark the vulnerability against both Commercial-Off-The-Shelf (COTS) FRS (Cognitec FaceVACS-SDK Version 9.4.2) and deep learning based FRS (Arcface) for three different morphing factors. Further, we also evaluate the performance of Morphing Attack Detection (MAD) in detecting such morphing attacks of newborn faces. We conduct experiments on four different Off-The-Shelf MAD techniques to benchmark the detection performance on newborn morph attacks.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127048695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sponsors for 2021 International Joint Conference on Biometrics","authors":"","doi":"10.1109/ijcb52358.2021.9521654","DOIUrl":"https://doi.org/10.1109/ijcb52358.2021.9521654","url":null,"abstract":"","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131679533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TDS-Net: Towards Fast Dynamic Random Hand Gesture Authentication via Temporal Difference Symbiotic Neural Network","authors":"Wen-Bing Song, Wenxiong Kang, Yulin Yang, Linpu Fang, Chang Liu, Xingyan Liu","doi":"10.1109/IJCB52358.2021.9484390","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484390","url":null,"abstract":"Hand gesture is a new emerging biometric trait containing both physiological and behavioral characteristics. With the popularity of various cameras, and the rich identity features and contactless authentication mode embedded in gestures themselves, vision-based hand gesture authentication has great potential value. However, current hand gesture authentication methods heavily rely on defined gestures and require identical enrollment and verification gestures, which limits the user-friendliness and efficiency of authentication. It is arguably true that authentication in a simpler and faster way, without the need to remember gestures, will be more approachable. Thus, a fast dynamic random hand gesture authentication method is introduced, in which users can perform a random improvised gesture in both the enrollment and verification stage. To better utilize the physiological and behavioral characteristics of hand gestures, an efficient network named Temporal Difference Symbiotic Neural Network (TDS-Net) equipped with our designed behavioral energy-based feature fusion module (BE-Fusion module) is proposed. Extensive experiments on the SCUT-DHGA dataset demonstrate that TDS-Net outperforms the recent state-of-the-art methods.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127674621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}