{"title":"Learning-based local-patch resolution reconstruction of iris smart-phone images","authors":"F. Alonso-Fernandez, R. Farrugia, J. Bigün","doi":"10.1109/BTAS.2017.8272771","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272771","url":null,"abstract":"Application of ocular biometrics in mobile and at a distance environments still has several open challenges, with the lack quality and resolution being an evident issue that can severely affects performance. In this paper, we evaluate two trained image reconstruction algorithms in the context of smart-phone biometrics. They are based on the use of coupled dictionaries to learn the mapping relations between low and high resolution images. In addition, reconstruction is made in local overlapped image patches, where up-scaling functions are modelled separately for each patch, allowing to better preserve local details. The experimental setup is complemented with a database of 560 images captured with two different smart-phones, and two iris comparators employed for verification experiments. We show that the trained approaches are substantially superior to bilinear or bicubic interpolations at very low resolutions (images of 13×13 pixels). Under such challenging conditions, an EER of ∼7% can be achieved using individual comparators, which is further pushed down to 4–6% after the fusion of the two systems.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124365845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iris and periocular recognition in arabian race horses using deep convolutional neural networks","authors":"Mateusz Trokielewicz, M. Szadkowski","doi":"10.1109/BTAS.2017.8272736","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272736","url":null,"abstract":"This paper presents a study devoted to recognizing horses by means of their iris and periocular features using deep convolutional neural networks (DCNNs). Identification of race horses is crucial for animal identity confirmation prior to racing. As this is usually done shortly before a race, fast and reliable methods that are friendly and inflict no harm upon animals are important. Iris recognition has been shown to work with horse irides, provided that algorithms deployed for such task are fine-tuned for horse irides and input data is of very high quality. In our work, we examine a possibility of utilizing deep convolutional neural networks for a fusion of both iris and periocular region features. With such methodology, ocular biometrics in horses could perform well without employing complicated algorithms that require a lot offline-tuning and prior knowledge of the input image, while at the same time being rotation, translation, and to some extent also image quality invariant. We were able to achieve promising results, with EER=9.5%o using two network architectures with score-level fusion.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124243884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fingerprint pose estimation based on faster R-CNN","authors":"J. Ouyang, Jianjiang Feng, Jiwen Lu, Zhenhua Guo, Jie Zhou","doi":"10.1109/BTAS.2017.8272707","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272707","url":null,"abstract":"Fingerprint pose estimation is one of the bottlenecks of indexing in large scale database. The existing methods of pose estimation are based on manually appointed features (e.g. special points, ridges, orientation filed). In this paper, we propose a method based on deep learning to achieve accurate pose estimation. Faster R-CNN is adopted to detect the center point and rough direction, followed by intra-class and inter-class combination to calculate the precise direction. Extensive experiments on NIST-14 show that (1) the predicted poses are close to manual annotations even when the fingerprints are incomplete or noisy, (2) the estimated poses for matching fingerprint pairs are very consistent and (3) by registering fingerprints using the estimated pose, the accuracy of a state-of-the-art fingerprint indexing system is further improved.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115200803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep expectation for estimation of fingerprint orientation fields","authors":"Patrick Schuch, Simon-Daniel Schulz, C. Busch","doi":"10.1109/BTAS.2017.8272697","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272697","url":null,"abstract":"Estimation of the orientation field is one of the key challenges during biometric feature extraction from a fingerprint sample. Many important processing steps rely on an accurate and reliable estimation. This is especially challenging for samples of low quality, for which in turn accurate preprocessing is essential. Regressional Convolutional Neural Networks have shown their superiority for bad quality samples in the independent benchmark framework FVC-ongoing. This work proposes to incorporate Deep Expectation. Options for further improvements are evaluated in this challenging environment of low quality images and small amount of training data. The findings from the results improve the new algorithm called DEX-OF. Incorporating Deep Expectation, improved regularization, and slight model changes DEX-OF achieves an RMSE of 7.52° on the bad quality dataset and 4.89° at the good quality dataset at FVC-ongoing. These are the best reported error rates so far.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129804843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single 2D pressure footprint based person identification","authors":"Xinnian Wang, Huiyu Wang, Qi-Chang Cheng, Namusisi Linda Nankabirwa, Zhang Tao","doi":"10.1109/BTAS.2017.8272725","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272725","url":null,"abstract":"Footprints carry many important human characteristics, such as anatomical structures of the foot, skin texture of the foot sole, standing or walking habits, and so on. They play vital roles in forensic investigations as an alternative biometric. In this paper, we propose an automatic footprint based person identification method using a single bare or socked footprint, which differs from the existing bare footprint based methods. An area rank filter is put forward to remove dust noises. Pressure distribution prior of the hind footprint is proposed to estimate the footprint direction. Both Geometrical Shape Spectrum Representation and Pressure Radial Gradient Map are proposed to represent a footprint in views of geometric shape, anatomical structure and one's standing or walking habits, which are also rotation and translation invariant. We also put forward a regional confidence value based method to compute the similarity values between two footprints. Additionally, we have constructed an evaluation dataset composed of 480 subjects and 19200 bare or socked footprints. 
Experimental results show that the proposed algorithm outperforms state of the-art algorithms, and its recognition rate reaches 98.75%.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128249469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting cross-age face verification via generative age normalization","authors":"G. Antipov, M. Baccouche, J. Dugelay","doi":"10.1109/BTAS.2017.8272698","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272698","url":null,"abstract":"Despite the tremendous progress in face verification performance as a result of Deep Learning, the sensitivity to human age variations remains an Achilles' heel of the majority of the contemporary face verification software. A promising solution to this problem consists in synthetic aging/rejuvenation of the input face images to some predefined age categories prior to face verification. We recently proposed [3] Age-cGAN aging/rejuvenation method based on generative adversarial neural networks allowing to synthesize more plausible and realistic faces than alternative non-generative methods. However, in this work, we show that Age-cGAN cannot be directly used for improving face verification due to its slightly imperfect preservation of the original identities in aged/rejuvenated faces. We therefore propose Local Manifold Adaptation (LMA) approach which resolves the stated issue of Age-cGAN resulting in the novel Age-cGAN+LMA aging/rejuvenation method. Based on Age-cGAN+LMA, we design an age normalization algorithm which boosts the accuracy of an off-the-shelf face verification software in the cross-age evaluation scenario.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126690955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-pose landmark localization using multi-dropout framework","authors":"G. Hsu, Cheng-Hua Hsieh","doi":"10.1109/BTAS.2017.8272722","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272722","url":null,"abstract":"We propose the Multiple Dropout Framework (MDF) for facial landmark localization across large poses. Unlike most landmark detectors only work for poses less than 45 degree in yaw, the proposed MDF works for pose as large as 90 degree, i.e., full profile. In the proposed MDF, the Single Shot Multibox Detector (SSD) [10] is tailored for fast and precise face detection. Given an SSD detected face, a Multiple Dropout Network (MDN) is proposed to classify the face into either frontal or profile pose, and for each pose another MDN is configured for detecting pose-oriented landmarks. As the MDF framework contains one MDN (pose) classifier and two MDN (landmark) regressors, this study aims to determine the MDN structures and settings appropriate for handling classification and regression tasks. The MDN framework demonstrates the following advantages and observations. (1) Landmark detection across poses can be better approached by incorporating a pose classifier with pose-oriented landmark regressors. (2) Multiple dropouts are required for stabilizing the training of regressor networks. (3) Additional hand-crafted features, such as the Local Binary Pattern (LBP), can improve the accuracy of landmark localization. (4) Face profiling is a powerful tool for offering a large cross-pose training set. 
A comparison study on benchmark databases shows that the MDN delivers a competitive performance to the state-of-the-art approaches for face alignment across large poses.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"231 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133256202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A decision-level fusion strategy for multimodal ocular biometric in visible spectrum based on posterior probability","authors":"Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein","doi":"10.1109/BTAS.2017.8272772","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272772","url":null,"abstract":"In this work, we propose a posterior probability-based decision-level fusion strategy for multimodal ocular biometric in the visible spectrum employing iris, sclera and peri-ocular trait. To best of our knowledge this is the first attempt to design a multimodal ocular biometrics using all three ocular traits. Employing all these traits in combination can help to increase the reliability and universality of the system. For instance in some scenarios, the sclera and iris can be highly occluded or for completely closed eyes scenario, the peri-ocular trait can be relied on for the decision. The proposed system is constituted of three independent traits and their combinations. The classification output of the trait which produces highest posterior probability is to consider as the final decision. An appreciable reliability and universal applicability of ocular trait are achieved in experiments conducted employing the proposed scheme.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131678617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing resources on smartphone gait recognition","authors":"Pablo Fernández López, Jorge Sanchez-Casanova, Paloma Tirado-Martin, J. Liu-Jimenez","doi":"10.1109/BTAS.2017.8272679","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272679","url":null,"abstract":"Inertial gait recognition is a biometric modality with increasing interest. Gait recognition in smartphones could become one of the most user-friendly recognition systems. Some state-of-art algorithms need to perform cross-comparisons of gait cycles to obtain a comparison result. In this contribution, two facts are studied in order to reduce the computational cost: the influence of using representative gait cycles and the gait signals length. The results obtained show that cross-comparisons could be performed with representative gait cycles without heavily penalizing accuracy and reducing computational cost, and that selecting representative gait cycles from the end of the signal perform better that the ones on the beginning.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124111905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gender and ethnicity classification of Iris images using deep class-encoder","authors":"Maneet Singh, Shruti Nagpal, Mayank Vatsa, Richa Singh, A. Noore, A. Majumdar","doi":"10.1109/BTAS.2017.8272755","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272755","url":null,"abstract":"Soft biometric modalities have shown their utility in different applications including reducing the search space significantly. This leads to improved recognition performance, reduced computation time, and faster processing of test samples. Some common soft biometric modalities are ethnicity, gender, age, hair color, iris color, presence of facial hair or moles, and markers. This research focuses on performing ethnicity and gender classification on iris images. We present a novel supervised auto-encoder based approach, Deep Class-Encoder, which uses class labels to learn discriminative representation for the given sample by mapping the learned feature vector to its label. The proposed model is evaluated on two datasets each for ethnicity and gender classification. The results obtained using the proposed Deep Class-Encoder demonstrate its effectiveness in comparison to existing approaches and state-of-the-art methods.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126702792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}