{"title":"Sensitivity of Age Estimation Systems to Demographic Factors and Image Quality: Achievements and Challenges","authors":"A. Akbari, Muhammad Awais, J. Kittler","doi":"10.1109/IJCB48548.2020.9304891","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304891","url":null,"abstract":"Recently, impressively growing efforts have been devoted to the challenging task of facial age estimation. The improvements in performance achieved by new algorithms are measured on several benchmarking test databases with different characteristics to check on consistency. While this is a valuable methodology in itself, a significant issue in the most age estimation related studies is that the reported results lack an assessment of intrinsic system uncertainty. Hence, a more in-depth view is required to examine the robustness of age estimation systems in different scenarios. The purpose of this paper is to conduct an evaluative and comparative analysis of different age estimation systems to identify trends, as well as the points of their critical vulnerability. In particular, we investigate four age estimation systems, including the online Microsoft service, two best state-of-the-art approaches advocated in the literature, as well as a novel age estimation algorithm. We analyse the effect of different internal and external factors, including gender, ethnicity, expression, makeup, illumination conditions, quality and resolution of the face images, on the performance of these age estimation systems. 
The goal of this sensitivity analysis is to provide the biometrics community with insight into and understanding of the critical subject-, camera- and environment-based factors that affect the overall performance of the age estimation systems under study.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121311458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Micro Stripes Analyses for Iris Presentation Attack Detection","authors":"Meiling Fang, N. Damer, Florian Kirchbuchner, Arjan Kuijper","doi":"10.1109/IJCB48548.2020.9304886","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304886","url":null,"abstract":"Iris recognition systems are vulnerable to the presentation attacks, such as textured contact lenses or printed images. In this paper, we propose a lightweight framework to detect iris presentation attacks by extracting multiple micro-stripes of expanded normalized iris textures. In this procedure, a standard iris segmentation is modified. For our Presentation Attack Detection (PAD) network to better model the classification problem, the segmented area is processed to provide lower dimensional input segments and a higher number of learning samples. Our proposed Micro Stripes Analyses (MSA) solution samples the segmented areas as individual stripes. Then, the majority vote makes the final classification decision of those micro-stripes. Experiments are demonstrated on five databases, where two databases (IIITD-WVU and Notre Dame) are from the LivDet-2017 Iris competition. An in-depth experimental evaluation of this framework reveals a superior performance compared with state-of-the-art (SoTA) algorithms. 
Moreover, our solution minimizes the confusion between textured (attack) and soft (bona fide) contact lens presentations.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115959132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
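The stripe-sampling and majority-vote step described in the MSA abstract can be sketched as follows. This is a minimal illustration only: the function names, the stripe count, and the toy per-stripe classifier are assumptions, not the authors' implementation.

```python
import numpy as np

def micro_stripes_decision(normalized_iris, n_stripes, classify_stripe):
    """Sample a normalized iris texture (H x W array) into horizontal
    micro-stripes, classify each stripe independently (0 = bona fide,
    1 = attack), and take a majority vote for the final decision."""
    stripes = np.array_split(normalized_iris, n_stripes, axis=0)
    votes = [classify_stripe(stripe) for stripe in stripes]
    # Flag the presentation as an attack if most stripes are flagged.
    return int(sum(votes) > len(votes) / 2)
```

For example, with a toy classifier that flags a stripe when its mean intensity exceeds 0.5, an all-ones texture is flagged as an attack and an all-zeros texture is not.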
{"title":"All-in-one “HairNet”: A Deep Neural Model for Joint Hair Segmentation and Characterization","authors":"D. Borza, E. Yaghoubi, J. Neves, Hugo Proença","doi":"10.1109/IJCB48548.2020.9304904","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304904","url":null,"abstract":"The hair appearance is among the most valuable soft biometric traits when performing human recognition at-a-distance. Even in degraded data, the hair's appearance is instinctively used by humans to distinguish between individuals. In this paper we propose a multi-task deep neural model capable of segmenting the hair region, while also inferring the hair color, shape and style, all from in-the-wild images. Our main contributions are two-fold: 1) the design of an all-in-one neural network, based on depthwise separable convolutions to extract the features; and 2) the use convolutional feature masking layer as an attention mechanism that enforces the analysis only within the ‘hair’ regions. In a conceptual perspective, the strength of our model is that the segmentation mask is used by the other tasks to perceive - at feature-map level - only the regions relevant to the attribute characterization task. This paradigm allows the network to analyze features from nonrectangular areas of the input data, which is particularly important, considering the irregularity of hair regions. 
Our experiments showed that the proposed approach reaches a hair segmentation performance comparable to the state-of-the-art, with the main advantage of performing multiple levels of analysis in a single-shot paradigm.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127231503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
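The feature-masking attention described in the HairNet abstract — letting the attribute heads see only the segmented hair region at feature-map level — amounts to zeroing activations outside the mask. A hypothetical minimal version (names and shapes are assumptions, not the paper's code):

```python
import numpy as np

def mask_feature_maps(feature_maps, hair_mask):
    """Zero out feature-map activations outside the hair segmentation mask,
    so downstream attribute heads only 'see' the (irregular) hair region.
    feature_maps: (C, H, W) array; hair_mask: (H, W) binary array."""
    # Broadcasting applies the same spatial mask to every channel.
    return feature_maps * hair_mask[None, :, :]
```

Because the mask is applied per spatial location, the retained region can be arbitrarily irregular, which is the point the abstract makes about non-rectangular hair areas.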
{"title":"Human Activity Analysis: Iterative Weak/Self-Supervised Learning Frameworks for Detecting Abnormal Events","authors":"Bruno Degardin, Hugo Proença","doi":"10.1109/IJCB48548.2020.9304905","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304905","url":null,"abstract":"Having observed the unsatisfactory state-of-the-art performance in detecting abnormal events, this paper describes an iterative self-supervised learning method for such purpose. The proposed solution is composed of two experts that - at each step - find the most confidently classified instances to augment the amount of data available for the next iteration. Our contributions are four-fold: 1) we describe the iterative learning framework composed of experts working in the weak/self-supervised paradigms and providing learning data to each other, with the novel instances being filtered by a Bayesian framework; 2) upon Sultani et al. [14]'s work, we suggest a novel term the loss function that spreads the scores in the unit interval and is important for the performance of the iterative framework; 3) we propose a late decision fusion scheme, in which an ensemble of Decision Trees learned from bootstrap samples fuses the scores of the top-3 methods, reducing the EER values about 20% over the state-of-the-art; and 4) we announce the “Fights” dataset, fully annotated at the frame level, that can be freely used by the research community. 
The code, details of the experimental protocols and the dataset are publicly available at http://github.com/DegardinBruno/.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134407333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
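One round of the two-expert exchange described above — each expert pseudo-labelling its most confident instances for the other — can be sketched as follows. This is a schematic under assumed names and a simple confidence threshold; the paper filters instances with a Bayesian framework, which is not reproduced here.

```python
def exchange_confident_instances(expert_a, expert_b, pool, threshold=0.9):
    """One iteration of the two-expert scheme: each expert scores the
    unlabeled pool (scores in [0, 1], higher = more abnormal), and the
    instances it classifies most confidently are pseudo-labelled and
    handed to the other expert as extra training data for the next round.
    Returns (new data for expert A, new data for expert B)."""
    def confident(expert):
        picks = []
        for x in pool:
            score = expert(x)
            # Keep only instances scored confidently high or confidently low.
            if score >= threshold or score <= 1.0 - threshold:
                picks.append((x, int(round(score))))  # pseudo-label
        return picks
    # A's confident picks train B, and vice versa.
    return confident(expert_b), confident(expert_a)
```

Iterating this exchange grows each expert's training set with the other's most reliable decisions, which is the augmentation mechanism the abstract describes.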
{"title":"Using Deep Learning for Fusion of Eye and Mouse Movement based User Authentication","authors":"Yudong Liu, Yusheng Jiang, John Devenere","doi":"10.1109/IJCB48548.2020.9304926","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304926","url":null,"abstract":"This paper presents a deep learning based user authentication system which aims to identify an individual using data gathered from a mouse and the user's eyes during computer use in a controlled environment. A stacked bidirectional and unidirectional Long Short-Term Memory Recurrent Neural Network (SBV-LSTM-RNN) is introduced to distinguish a legitimate user from impostors. As one of the few attempts of using fusion of mouse and eye movement for user authentication, the proposed system, when adopted on a small dataset, has shown promising improvement compared to a similar system where fusion of eye and mouse modalities and a traditional machine learning method are used.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123830358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IHashNet: Iris Hashing Network based on efficient multi-index hashing","authors":"Avantika Singh, Chirag Vashist, Pratyush Gaurav, A. Nigam, Rameshwar Pratap","doi":"10.1109/IJCB48548.2020.9304925","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304925","url":null,"abstract":"Massive biometric deployments are pervasive in today's world. But despite the high accuracy of biometric systems, their computational efficiency degrades drastically with an increase in the database size. Thus, it is essential to index them. Here, in this paper, we propose an iris indexing scheme using real-valued deep iris features binarized to iris bar codes (IBC) compatible with the indexing structure. Firstly, for extracting robust iris features, we have designed a network utilizing the domain knowledge of ordinal filtering and learning their nonlinear combinations. Later these real-valued features are binarized. Finally, for indexing the iris dataset, we have proposed a Mcom loss that can transform the binary feature into an improved feature compatible with the Multi-Index Hashing scheme. This Mcom loss function ensures the equal distribution of Hamming distance among all the contiguous disjoint sub-strings. To the best of our knowledge, this is the first work in the iris indexing domain that presents an end-to-end iris indexing structure. 
Experimental results on four datasets are presented to demonstrate the efficacy of the proposed approach.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128079962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
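The Multi-Index Hashing scheme that the Mcom loss targets splits each binary code into contiguous disjoint sub-strings; by the pigeonhole principle, two codes within Hamming radius r must agree to within floor(r/m) bits on at least one of the m sub-strings. A generic MIH candidate filter can be sketched as follows (this illustrates the indexing principle, not the paper's implementation; all names are assumptions):

```python
def split_substrings(code, m):
    """Split a binary code (string of '0'/'1') into m contiguous,
    disjoint sub-strings, as multi-index hashing requires."""
    step = len(code) // m
    return [code[i * step:(i + 1) * step] for i in range(m)]

def mih_candidates(query, database, m, radius):
    """Pigeonhole principle: if two codes differ in at most `radius` bits,
    at least one of the m sub-strings differs in at most radius // m bits.
    Return the database indices passing this per-substring filter."""
    q_subs = split_substrings(query, m)
    sub_radius = radius // m
    hits = []
    for idx, code in enumerate(database):
        subs = split_substrings(code, m)
        if any(sum(a != b for a, b in zip(qs, cs)) <= sub_radius
               for qs, cs in zip(q_subs, subs)):
            hits.append(idx)
    return hits
```

Spreading the Hamming distance evenly across the sub-strings, as the Mcom loss enforces, keeps each per-substring index selective, so this filter prunes most of the database before any full-code comparison.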
{"title":"Diversity Blocks for De-biasing Classification Models","authors":"Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa","doi":"10.1109/IJCB48548.2020.9304931","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304931","url":null,"abstract":"Recent studies have highlighted a major caveat in various high performing automated systems for tasks such as facial analysis (e.g. gender prediction), object classification, and image to caption generation. Several of the existing systems have been shown to yield biased results towards or against a particular subgroup. The biased behavior exhibited by these models when deployed and used in a real world scenario presents with the challenge of automated systems being unfair. In this research, we propose a novel technique, diversity block, for de-biasing existing models without re-training them. The proposed technique requires small amount of training data and can be incorporated with an existing model for addressing the challenge of biased predictions. This is done by adding a diversity block and computing the prediction based on the scores of the original model and the diversity block in order to get a more confident and de-biased prediction. The efficacy of the proposed technique has been demonstrated on the task of gender prediction, along with an auxiliary case study on object classification.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"515 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123073212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Brief Literature Review and Survey of Adult Perceptions on Biometric Recognition for Infants and Toddlers","authors":"T. Neal, Ashok R. Patel","doi":"10.1109/IJCB48548.2020.9304868","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304868","url":null,"abstract":"Over the past decade, analyses of biometric recognition for infant and toddler identification have emerged. These efforts are critical since existing child identification programs are solely utilized in missing children cases; such programs are not employed in the wider spectrum of societal issues that child identification efforts could help to resolve, such as baby swapping in hospitals, illegal adoption, and inadequate vaccination tracking. As such, this paper provides a brief literature review on biometric recognition for infants and toddlers. We cover the range of potential applications for biometric identification of infant and toddler-aged children, along with current research findings, especially those involving fingerprint recognition due to the permanence of fingerprint features and the practicality of implementing fingerprint recognition systems for younger children. In addition, we investigate the acceptability of biometric technologies for infants and toddlers by conducting an online survey (N = 133), wherein we gather the opinions of adults on the utility of biometric systems for young children, how these systems might help solve societal issues, and problems users may face if such a system were available. 
Key results show that over half of respondents are comfortable with a biometric system for infants and toddlers, and that parents in particular are more likely than non-parents to view biometric recognition as useful for helping to resolve societal issues; however, data storage, privacy, and the child's inability to provide consent on their own are common concerns.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132475047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Recognition Oak Ridge (FaRO): A Framework for Distributed and Scalable Biometrics Applications","authors":"D. Bolme, Nisha Srinivas, Joel Brogan, David Cornett","doi":"10.1109/IJCB48548.2020.9304933","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304933","url":null,"abstract":"The facial biometrics community has seen a recent abundance of high-accuracy facial analytic models become freely available. Although these models' capabilities in facial detection, landmark detection, attribute analysis, and recognition are ever-increasing, they aren't always straightforward to deploy in a real-world environment. In reality, the use of the field's ever growing collection of models is becoming exceedingly difficult as library dependencies update and deprecate. Researchers often encounter headaches when attempting to utilize multiple models requiring different or conflicting software packages. Face Recognition Oak Ridge (FaRO) is an open-source project designed to provide a highly modular, flexible framework for unifying facial analytic models through a compartmentalized plug-and-play paradigm built on top of the gRPC (Google Remote Procedure Call) protocol. FaRO's server-client architecture and flexible portability allows easy construction of modularized and heterogeneous face analysis pipelines, distributed over many machines with differing hardware and software resources. 
This paper outlines FaRO's architecture and current capabilities, along with some experiments in model testing and distributed scaling through FaRO.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122072880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FGAN: Fan-Shaped GAN for Racial Transformation","authors":"Jiancheng Ge, Weihong Deng, Mei Wang, Jiani Hu","doi":"10.1109/IJCB48548.2020.9304901","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304901","url":null,"abstract":"Racial bias in face recognition has recently been concerned by both general public and research community. Most face recognition systems have a strong bias in recognition accuracy for different races mainly because of the unbalanced ethnic distribution in their datasets. In this paper, we propose a novel generative adversarial network, which transfer the facial images of one race to corresponding images of other races, to facilitate the data augmentation to balance the ethnic distribution. Our approach can generate more realistic results and make the training process more stable than other image-to-image translation methods such as StarGAN and CycleGAN. Experiments results show the superiority of FGAN to the previous methods on the racial transformation task in terms of visual effects and quantitative results. Besides, we perform extensive experiments to show our data augmentation is beneficial to reduce the racial bias, improving the face recognition rate of non-Caucasian people. Finally, we show the possibility to generate the ethnic independent facial image by the average of various races.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122235364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}