{"title":"Enhanced segmentation and complex-sclera features for human recognition with unconstrained visible-wavelength imaging","authors":"Sinan H. Alkassar, W. L. Woo, S. Dlay, J. Chambers","doi":"10.1109/ICB.2016.7550049","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550049","url":null,"abstract":"Sclera recognition has received attention recently due to the distinctive features extracted from blood vessels within the sclera. However, uncontrolled human pose, multiple iris gaze directions, varying image capture distances and variable lighting conditions pose many challenges for sclera recognition. Therefore, we propose an enhanced system for sclera recognition with visible-wavelength eye images captured in unconstrained conditions. The proposed segmentation algorithm fuses multiple color space skin classifiers to overcome noise factors introduced when acquiring sclera images, such as motion, blur, gaze and rotation. We also propose a blood vessel enhancement and feature extraction method, which we denote complex-sclera features, to increase adaptability to noisy blood-vessel deformations. The proposed system is evaluated using the UBIRIS.v1, UBIRIS.v2 and UTIRIS databases, and the results are promising in terms of accuracy and, owing to low processing times, suitability for real-time applications.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129234096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic latent value determination","authors":"Kai Cao, T. Chugh, Jiayu Zhou, Elham Tabassi, Anil K. Jain","doi":"10.1109/ICB.2016.7550084","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550084","url":null,"abstract":"Latent fingerprints are the most frequently encountered and reliable crime scene evidence used in forensic investigations. Automatic methods for quantitative assessment of a latent in terms of (i) value for individualization (VID), (ii) value for exclusion only (VEO), and (iii) no value (NV), are needed to minimize the workload of latent examiners so that they can pay more attention to challenging prints (VID and NV latents). Current value determination is either made by examiners or predicted given manually annotated features. Because both of these approaches depend on human markup, they are subjective and time consuming. We propose a fully automatic method for latent value determination based on the number, reliability, and compactness of the minutiae, ridge quality, ridge flow, and the number of core and delta points. Given the small number of latents with VEO and NV labels in two latent databases available to us (NIST SD27 and WVU), only a two-class value determination is considered, namely VID and VID̅, where the VID̅ class contains VEO and NV latents. Experimental results show that the value determination by the proposed method (i) obviates the need for examiner markup while maintaining the accuracy of value determination and (ii) can predict the AFIS performance better than examiners.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122590807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical technique for gait recognition on curved and straight trajectories","authors":"Fatimah Abdulsattar, J. Carter","doi":"10.1109/ICB.2016.7550059","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550059","url":null,"abstract":"Many studies show the effectiveness of gait in surveillance and access control scenarios. However, appearance changes due to walking direction changes impose a challenge for gait recognition techniques that assume people only walk in a straight line. In this paper, the effect of walking along straight and curved paths is studied by proposing a practical technique based on three key frames at the start, middle and end of the gait cycle. The positions of these frames are estimated in 3D space and then used to estimate the local walking direction in the first and second parts of the cycle. The technique uses 3D volume sequences of the subjects to adapt to changes in walking direction. The performance is evaluated using a newly collected dataset and the Kyushu University 4D Gait Dataset, containing people walking in straight lines and curves. With the proposed technique, we obtain a correct classification rate of 98% for matching straight with straight walking and 81% for matching straight with curved walking, averaged over both datasets. The variation in walking patterns when a person walks along a straight or curved path is most likely responsible for the difference. In support of this, the recognition rate when matching curved with curved walking is 99% on our dataset.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128443110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending non-negative matrix factorisation to 3D registered data","authors":"W. P. Koppen, W. Christmas, D. Crouch, Walter F. Bodmer, J. Kittler","doi":"10.1109/ICB.2016.7550083","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550083","url":null,"abstract":"The use of non-negative matrix factorisation (NMF) on 2D face images has been shown to result in sparse feature vectors that encode for local patches on the face, and thus provides a statistically justified approach to learning parts from wholes. Despite this success on 2D images, the method has so far not been extended to 3D images. The main reason for this is that 3D space is a continuum, and so it is not apparent how to represent 3D coordinates in a non-negative fashion. This work compares different non-negative representations for spatial coordinates, and demonstrates that not all non-negative representations are suitable. We analyse the representational properties that make NMF a successful method to learn sparse 3D facial features. Using our proposed representation, the factorisation results in sparse and interpretable facial features.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127225085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gait collector: An automatic gait data collection system in conjunction with an experience-based long-run exhibition","authors":"Yasushi Makihara, Takuhiro Kimura, Fumio Okura, Ikuhisa Mitsugami, Masataka Niwa, Chihiro Aoki, Atsuyuki Suzuki, D. Muramatsu, Y. Yagi","doi":"10.1109/ICB.2016.7550090","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550090","url":null,"abstract":"Biometric data collection is an important first step toward biometrics research practice, although it is a considerably laborious task, particularly for behavioral biometrics such as gait. We therefore propose an automatic gait data collection system in conjunction with an experience-based exhibition. In the exhibition, participants enjoy an attractive online demonstration of state-of-the-art video-based gait analysis comprising intuitive gait feature measurement and gait-based age estimation while we simultaneously collect their gait data along with informed consent. At the time of this publication, we are holding the exhibition in association with a science museum and have successfully collected the gait data of 47,615 subjects over 246 days, which has already exceeded the size of the largest existing gait database in the world.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121836855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Influence of cataract surgery on iris recognition: A preliminary study","authors":"Ramachandra Raghavendra, K. Raja, Vinay Krishna Vemuri, Swetha Kumari, Pierre Gacon, E. Krichen, C. Busch","doi":"10.1109/ICB.2016.7550067","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550067","url":null,"abstract":"Iris biometrics is considered a unique and accurate biometric characteristic suited for large-scale applications such as India's AADHAR, CANPASS, and many other national ID programs. However, the accuracy of iris recognition is observed to degrade when the eye (or iris) is affected by disease. Among eye diseases, cataract, which results in a clouding of the eye's lens, is a particular problem, especially in developing countries. In this paper, we present a preliminary study on 84 data subjects to reveal the effect of cataract on iris recognition performance. We investigate three different scenarios: (1) enrolment and recognition of affected eyes (pre-operated eye); (2) enrolment and recognition of operated eyes (post-operated eye); (3) enrolment with affected eyes and recognition with operated eyes. Extensive experiments are carried out using five different academic and commercial iris recognition methods to gain complete insight into the impact of cataract on recognition performance. Results obtained on our database indicate degraded verification performance for the cataract-operated eye.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123904363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Belief models for integrating match scores with liveness and quality measures in a fingerprint verification system","authors":"Yaohui Ding, A. Rattani, A. Ross","doi":"10.1109/ICB.2016.7550095","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550095","url":null,"abstract":"Recent research has sought to improve the resilience of fingerprint verification systems to spoof attacks by combining match scores with both liveness measures and image quality in a learning-based fusion framework. Designing such a fusion framework is challenging because quality and liveness measures can impact the match scores and, therefore, the influence of these variables on the match score has to be modelled. Further, these measures themselves are influenced by many latent factors, such as the fabrication material used to generate fake fingerprints. We advance the state-of-the-art by proposing two Bayesian Belief Network (BBN) models that can utilize these measures effectively, by appropriately modelling the relationship between quality, liveness measure and match scores with the consideration of latent variables. We demonstrate the efficacy of the proposed models on the LivDet 2011 fingerprint spoof dataset.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123422409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-task ConvNet for blind face inpainting with application to face verification","authors":"Shu Zhang, R. He, Zhenan Sun, T. Tan","doi":"10.1109/ICB.2016.7550058","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550058","url":null,"abstract":"Face verification between ID photos and life photos (FVBIL) is gaining traction with the rapid development of the Internet. However, ID photos provided by the Chinese administration center are often corrupted with wavy lines to prevent misuse, which poses great difficulty for accurate FVBIL. Therefore, this paper tries to improve verification performance by studying a new problem, i.e. blind face inpainting, where we aim at restoring clean face images from the corrupted ID photos. The term blind indicates that the locations of the corruptions are not known in advance. We formulate blind face inpainting as a joint detection and reconstruction problem. A multi-task ConvNet is accordingly developed to facilitate end-to-end network training for accurate and fast inpainting. The ConvNet is used to (i) regress the residual values between the clean/corrupted ID photo pairs and (ii) predict the positions of residual regions. Moreover, to achieve better inpainting results, we employ a skip connection to fuse information in the intermediate layer. To enable training of our ConvNet, we collect a dataset of synthetic clean/corrupted ID photo pairs with 500 thousand samples from around 10 thousand individuals. Experiments demonstrate that our multi-task ConvNet achieves superior performance in terms of reconstruction errors, convergence speed and verification accuracy.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116341829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting privileged information for facial expression recognition","authors":"Michalis Vrigkas, Christophoros Nikou, I. Kakadiaris","doi":"10.1109/ICB.2016.7550048","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550048","url":null,"abstract":"Most facial expression recognition methods assume that training and testing data are identically distributed. As facial image sequences may contain information from heterogeneous sources, facial data may be asymmetrically distributed between training and testing, as it may be difficult to maintain the same quality and quantity of information. In this work, we present a novel classification method based on the learning using privileged information (LUPI) paradigm to address the problem of facial expression recognition. We introduce a probabilistic classification approach based on conditional random fields (CRFs) to indirectly propagate knowledge from the privileged to the regular feature space. Each feature space owns specific parameter settings, which are combined through a Gaussian prior to train the proposed t-CRF+ model, allowing the different tasks to share parameters and improve classification performance. The proposed method is validated on two challenging and publicly available facial expression recognition benchmarks and improves on state-of-the-art methods in the LUPI framework.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127171438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D feature array involved registration algorithm for multi-pose hand vein authentication","authors":"Yong Qi, Ya Zhou, Chang Zhou, Xinran Hu, Xiaoming Hu","doi":"10.1109/ICB.2016.7550061","DOIUrl":"https://doi.org/10.1109/ICB.2016.7550061","url":null,"abstract":"Traditional hand vein recognition technology is usually based on 2D infrared images, in which the hand vein patterns are unavoidably distorted by hand posture changes. Point cloud matching vein recognition based on the Kernel Correlation method provides a new perspective on hand vein recognition but also suffers from posture change. In this paper, a 3D point cloud registration algorithm is proposed to improve vein authentication under multiple poses. An improved Normal Distributions Transform algorithm is introduced and a coarse alignment based on a 3D feature array is designed, which is capable of eliminating the influence of global coordinate shift between the point clouds. Experiments show that the 3D feature array involved registration algorithm can effectively improve the recognition rate. After point cloud registration, the system maintains a recognition rate above 90%, with a corresponding error rate as low as 4%, when the hand posture changes within a range of ±20 degrees.","PeriodicalId":308715,"journal":{"name":"2016 International Conference on Biometrics (ICB)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126628297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}