{"title":"Conformal Mapping of a 3D Face Representation onto a 2D Image for CNN Based Face Recognition","authors":"J. Kittler, P. Koppen, P. Kopp, P. Huber, Matthias Rätsch","doi":"10.1109/ICB2018.2018.00029","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00029","abstract":"Fitting 3D Morphable Face Models (3DMM) to a 2D face image allows the separation of face shape from skin texture, as well as correction for face expression. However, the recovered 3D face representation is not readily amenable to processing by convolutional neural networks (CNN). We propose a conformal mapping from a 3D mesh to a 2D image, which makes these machine learning tools accessible to 3D face data. Experiments with a CNN based face recognition system designed using the proposed representation have been carried out to validate the advocated approach. The results obtained on standard benchmarking data sets show its promise.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-07-16"}
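The record above does not specify which conformal map the authors use, so as a purely illustrative stand-in, the sketch below implements stereographic projection, a classic conformal map from the unit sphere (minus the north pole) to the plane; it is not the paper's mesh-to-image mapping.

```python
def stereographic_projection(x, y, z):
    """Map a point on the unit sphere (excluding the north pole z = 1)
    to the 2D plane via stereographic projection, a conformal map:
    angles between curves on the sphere are preserved in the plane."""
    if z >= 1.0:
        raise ValueError("north pole has no image under this projection")
    return (x / (1.0 - z), y / (1.0 - z))

# The south pole maps to the origin; equator points map to the unit circle.
u, v = stereographic_projection(0.0, 0.0, -1.0)
```

A mesh-to-image pipeline would apply such a map per vertex and then rasterize the texture into a regular 2D grid a CNN can consume.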
{"title":"Multifactor User Authentication with In-Air-Handwriting and Hand Geometry","authors":"Duo Lu, Dijiang Huang, Yuli Deng, Adel Alshamrani","doi":"10.1109/ICB2018.2018.00046","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00046","abstract":"On wearable and Virtual Reality (VR) platforms, user authentication is a basic function, but a keyboard or touchscreen for typing a password usually cannot be provided. Hand gestures, and especially in-air-handwriting, can potentially be used for user authentication because a gesture input interface is readily available on these platforms. However, determining whether a login request comes from the legitimate user based on a piece of hand movement is challenging in both signal processing and matching, which leads to limited performance in existing systems. In this paper, we propose a multifactor user authentication framework using both the motion signal of a piece of in-air-handwriting and the geometry of the hand skeleton captured by a depth camera. To demonstrate this framework, we designed a signal matching algorithm, implemented a prototype, and conducted experiments on a dataset of 100 users that we collected. Our system achieves 0.6% Equal Error Rate (EER) without spoofing attack and 3.4% EER considering only spoofing data, a significant improvement over existing systems based on the Dynamic Time Warping (DTW) algorithm. In addition, we present an in-depth analysis of the utilized features to explain the reason for the performance boost.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-07-13"}
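The abstract above uses DTW-based matching as its baseline. For readers unfamiliar with it, a minimal textbook DTW distance between two 1-D motion signals can be sketched as follows; this is the generic algorithm, not the authors' matching method.

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between
    two 1-D sequences, using absolute difference as the local cost.
    DTW aligns the sequences elastically in time before summing costs."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = best cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW tolerates local time warping, a sequence and a slightly stretched copy of it score a distance of zero, which is exactly what makes naive DTW vulnerable to careful forgeries.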
{"title":"SSBC 2018: Sclera Segmentation Benchmarking Competition","authors":"Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein","doi":"10.1109/BTAS.2015.7358796","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358796","abstract":"This paper summarises the results of the Sclera Segmentation Benchmarking Competition (SSBC 2018), organised in the context of the 11th IAPR International Conference on Biometrics (ICB 2018). The aim of this competition was to record developments in sclera segmentation in a cross-sensor environment (the sclera trait captured using multiple acquisition sensors) and to draw researchers' attention to this line of research. For benchmarking purposes, we developed two datasets of sclera images captured using different sensors. The first is the Multi-Angle Sclera Dataset (MASD version 1), which was used in previous editions of the sclera segmentation competitions; the images in the second dataset were captured using an 8-megapixel mobile phone rear camera. As a baseline, manual segmentation masks of the sclera images in both datasets were produced. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the submitted segmentation techniques and to rank them. Six algorithms were submitted for the segmentation task. This paper analyses the results produced by these algorithms/systems and outlines a way forward for this line of research. Both datasets, along with some of the accompanying ground-truth/baseline masks, will be freely available for research purposes upon request to the authors by email.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-07-13"}
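The competition above ranks entries with precision- and recall-based measures. A minimal pixel-wise version of those two measures for binary segmentation masks looks like this (the exact scoring formula used by SSBC is not given in the record):

```python
def precision_recall(pred, truth):
    """Pixel-wise precision and recall of a predicted binary mask
    against a ground-truth mask (both given as flat lists of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted sclera pixels, how many are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true sclera pixels, how many were found
    return precision, recall
```

Precision penalizes over-segmentation (marking skin or iris as sclera), while recall penalizes under-segmentation, so competitions typically report both or a combined F-measure.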
{"title":"Evolutionary Methods for Generating Synthetic MasterPrint Templates: Dictionary Attack in Fingerprint Recognition","authors":"Aditi Roy, N. Memon, J. Togelius, A. Ross","doi":"10.1109/ICB2018.2018.00017","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00017","abstract":"Recent research has demonstrated the possibility of generating \"MasterPrints\" that can be used by an adversary to launch a dictionary attack against a fingerprint recognition system. MasterPrints are fingerprint images that fortuitously match a large number of other fingerprints, thereby compromising the security of a fingerprint-based biometric system, especially one equipped with a small-sized fingerprint sensor. This work presents new methods for creating a synthetic MasterPrint dictionary that sequentially maximizes the probability of matching a large number of target fingerprints. Three techniques, namely Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Differential Evolution (DE) and Particle Swarm Optimization (PSO), are explored. Experiments carried out using commercial fingerprint verification software and public datasets show that the proposed approaches perform well compared to previously known MasterPrint generation methods.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-07-13"}
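To give a feel for the evolutionary search the abstract describes, here is a toy (1+1) evolution strategy, a much simpler relative of CMA-ES, maximizing a stand-in objective. The `matched` function below is hypothetical: in the actual attack, the score would come from running a fingerprint matcher against the target set.

```python
import random

def one_plus_one_es(score, dim, iters=200, sigma=0.3, seed=0):
    """Minimal (1+1) evolution strategy: keep a single parent vector
    and accept a Gaussian-mutated child whenever it scores at least
    as well as the parent. CMA-ES refines this idea by adapting a
    full mutation covariance matrix; this sketch keeps sigma fixed."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = score(parent)
    for _ in range(iters):
        child = [x + rng.gauss(0, sigma) for x in parent]
        s = score(child)
        if s >= best:
            parent, best = child, s
    return parent, best

# Hypothetical stand-in for "number of target fingerprints matched":
# count coordinates of the template exceeding a matcher threshold.
matched = lambda tpl: sum(1 for x in tpl if x > 0.5)
template, n_matched = one_plus_one_es(matched, dim=8)
```

Because the acceptance rule never takes a worse solution, the matched count is monotonically non-decreasing over iterations, which mirrors the "sequentially maximizes" phrasing in the abstract.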
{"title":"Two-Stream Part-Based Deep Representation for Human Attribute Recognition","authors":"R. Anwer, F. Khan, Jorma T. Laaksonen","doi":"10.1109/ICB2018.2018.00024","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00024","abstract":"Recognizing human attributes in unconstrained environments is a challenging computer vision problem. State-of-the-art approaches to human attribute recognition are based on convolutional neural networks (CNNs). The de facto practice when training these CNNs on a large labeled image dataset is to take the RGB pixel values of an image as input to the network. In this work, we propose a two-stream part-based deep representation for human attribute classification. Besides the standard RGB stream, we train a deep network on mapped coded images with explicit texture information, which complements the standard RGB deep model. To integrate knowledge of human body parts, we employ deformable part-based models together with our two-stream deep model. Experiments are performed on the challenging Human Attributes (HAT-27) dataset consisting of 27 different human attributes. Our results clearly show that (a) the two-stream deep network provides a consistent gain in performance over the standard RGB model and (b) the attribute classification results are further improved with our two-stream part-based deep representations, leading to state-of-the-art results.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-07-13"}
{"title":"LivDet 2017 Fingerprint Liveness Detection Competition 2017","authors":"V. Mura, G. Orrú, Roberto Casula, A. Sibiriu, G. Loi, Pierluigi Tuveri, Luca Ghiani, G. Marcialis","doi":"10.1109/ICB2018.2018.00052","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00052","abstract":"Fingerprint Presentation Attack Detection (FPAD) deals with distinguishing images coming from artificial replicas of the fingerprint characteristic, made of materials like silicone, gelatine or latex, from images coming from live fingerprints. Images are captured by modern scanners, typically relying on solid-state or optical technologies. Since 2009, the Fingerprint Liveness Detection Competition (LivDet) has aimed to assess the performance of state-of-the-art algorithms according to a rigorous experimental protocol while, at the same time, providing a simple overview of the basic achievements. The competition is open to all academic research centers and all companies that work in this field. The positive, increasing trend in the number of participants, which supports the success of this initiative, was confirmed again this year: 17 algorithms were submitted to the competition, with larger involvement of companies and academia. This means that the topic is relevant to both sides, and points out that much work remains to be done in terms of fundamental and applied research.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-03-14"}
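Liveness-detection competitions of this kind typically score entries with two complementary error rates: the fraction of live samples rejected and the fraction of fakes accepted. A minimal sketch of that scoring, assuming a scalar "liveness" score where higher means more likely live (the exact LivDet metric names and protocol are not given in this record):

```python
def liveness_errors(live_scores, fake_scores, threshold=0.5):
    """Error rates for a presentation attack detector: the fraction of
    live samples misclassified as fake, the fraction of fake samples
    misclassified as live, and their average classification error."""
    err_live = sum(s < threshold for s in live_scores) / len(live_scores)
    err_fake = sum(s >= threshold for s in fake_scores) / len(fake_scores)
    ace = (err_live + err_fake) / 2.0  # average classification error
    return err_live, err_fake, ace
```

Reporting both rates matters because a detector can trivially drive either one to zero (accept everything, or reject everything); the average, or a threshold-swept curve, exposes the real trade-off.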
{"title":"Comparative Study of Digital Fingerprint Quality Assessment Metrics","authors":"Zhigang Yao, J. L. Bars, C. Charrier, C. Rosenberger","doi":"10.1109/ICB2018.2018.00014","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00014","abstract":"The quality assessment of biometric data serves as a tool to decide whether a biometric sample may be used to generate the user's reference template. Many studies have shown its significant impact on the subsequent performance of the biometric system. Since many metrics have been proposed for this purpose by researchers and standardization institutions, their relevance should be studied to evaluate their relative usefulness. This paper provides a comparative study of fingerprint quality assessment (FQA) metrics. We use the enrollment selection validation approach to perform an objective comparison. We evaluate 7 well-known FQA metrics on 9 datasets. Results show that these metrics depend on the dataset (i.e., the fingerprint sensor) while behaving similarly across matchers.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-02-20"}
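Comparisons like the one above are usually reported through verification error rates such as the EER: enroll with samples each quality metric selects, then measure how well genuine and impostor score distributions separate. A minimal EER estimator over genuine/impostor score lists (the generic definition, not this paper's exact protocol):

```python
def equal_error_rate(genuine, impostor):
    """Sweep a threshold over all observed scores and return the point
    where the false rejection rate (genuine scores below threshold) is
    closest to the false acceptance rate (impostor scores at or above
    it); the EER is their average at that threshold."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)
        far = sum(i >= t for i in impostor) / len(impostor)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return eer
```

A quality metric is then judged useful if selecting higher-quality enrollment samples lowers the EER of the downstream matcher.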
{"title":"Boosting Face in Video Recognition via CNN Based Key Frame Extraction","authors":"Xuan Qi, Chen Liu, S. Schuckers","doi":"10.1109/ICB2018.2018.00030","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00030","abstract":"Face in video recognition (FiVR) technology is widely applied in fields such as video analytics and real-time video surveillance. However, FiVR technology also faces the challenges of high-volume video data and real-time processing requirements, as well as improving the performance of face recognition (FR) algorithms. To overcome these challenges, frame selection becomes a necessary and beneficial step before the FR stage. In this paper, we propose a CNN-based key-frame extraction (KFE) engine with GPU acceleration, employing our innovative Face Quality Assessment (FQA) module. For theoretical performance analysis of the KFE engine, we evaluated representative one-person video datasets such as PaSC, FiA and ChokePoint using ROC and DET curves. For performance analysis under a practical scenario, we evaluated multi-person videos using the ChokePoint dataset as well as in-house captured full-HD videos. The experimental results show that our KFE engine can dramatically reduce the data volume while improving FR performance. In addition, our KFE engine can achieve faster-than-real-time performance with GPU acceleration when dealing with HD videos in real application scenarios.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-02-01"}
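The core idea of quality-driven key-frame extraction, independent of the paper's CNN architecture, is: score every frame with a quality function and forward only the best few to the recognizer. A minimal sketch, where the lambda quality score is a hypothetical stand-in for the paper's FQA module:

```python
def select_key_frames(frames, quality, k):
    """Score every frame with a quality function and return the indices
    of the k best-scoring frames, restored to temporal order so the
    downstream recognizer sees them in sequence."""
    ranked = sorted(range(len(frames)), key=lambda i: quality(frames[i]),
                    reverse=True)
    return sorted(ranked[:k])

# Toy example: frames represented by a precomputed scalar quality.
frames = [0.2, 0.9, 0.1, 0.7, 0.5]
keys = select_key_frames(frames, quality=lambda f: f, k=2)
```

This is where the data-volume reduction comes from: the recognizer runs on k frames instead of every frame, so selection cost must stay below the recognition cost it saves, hence the GPU-accelerated scoring.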
{"title":"Securing Minutia Cylinder Codes for Fingerprints through Physically Unclonable Functions: An Exploratory Study","authors":"Rosario Arjona, Miguel A. Prada-Delgado, I. Baturone, A. Ross","doi":"10.1109/ICB2018.2018.00019","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00019","abstract":"A number of personal devices, such as smartphones, have incorporated fingerprint recognition solutions for user authentication purposes. This work proposes a dual-factor fingerprint matching scheme based on P-MCCs (Protected Minutia Cylinder-Codes) generated from fingerprint images and PUFs (Physically Unclonable Functions) generated from device SRAMs (Static Random Access Memories). Combining the fingerprint identifier with the device identifier results in a secure template satisfying the discriminability, irreversibility, revocability, and unlinkability properties, which are strongly desired for data privacy and security. Experiments convey the benefits of the proposed dual-factor authentication mechanism in enhancing the security of personal devices that utilize biometric authentication schemes.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-02-01"}
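To illustrate the dual-factor idea in its simplest form, the sketch below XOR-binds a binary biometric code to a device PUF response, so the stored template is useless without both factors. This is not the paper's P-MCC construction: practical schemes additionally use error-correcting codes to tolerate noise in both the PUF response and the biometric sample.

```python
def bind(biometric_code: bytes, puf_response: bytes) -> bytes:
    """Bind a binary biometric template to a device PUF response via
    XOR; neither the template nor the PUF alone reveals the result."""
    assert len(biometric_code) == len(puf_response)
    return bytes(b ^ p for b, p in zip(biometric_code, puf_response))

def unbind(stored: bytes, puf_response: bytes) -> bytes:
    """XOR is its own inverse: regenerating the same PUF response on
    the same device recovers the biometric code from storage."""
    assert len(stored) == len(puf_response)
    return bytes(s ^ p for s, p in zip(stored, puf_response))
```

Revocability falls out naturally: re-binding with a fresh PUF-derived key yields a new, unlinkable template without changing the user's fingerprint data.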
{"title":"Improving 2D Face Recognition via Discriminative Face Depth Estimation","authors":"Jiyun Cui, Hao Zhang, Hu Han, S. Shan, Xilin Chen","doi":"10.1109/ICB2018.2018.00031","DOIUrl":"https://doi.org/10.1109/ICB2018.2018.00031","abstract":"As face recognition progresses from constrained to unconstrained scenarios, new challenges such as large pose, bad illumination, and partial occlusion are encountered. While 3D or multi-modality RGB-D sensors help face recognition systems achieve robustness against these challenges, the requirement of new sensors limits their application scenarios. In this paper, we propose a discriminative face depth estimation approach to improve 2D face recognition accuracy under unconstrained scenarios. Our discriminative depth estimation method uses a cascaded FCN and CNN architecture, in which the FCN aims at recovering the depth from an RGB image, and the CNN retains the separability of individual subjects. The estimated depth information is then used as a complementary modality to RGB for face recognition tasks. Experiments on two public datasets and a dataset we collected show that the proposed face recognition method using RGB and estimated depth information achieves better accuracy than using the RGB modality alone.","journal":{"name":"2018 International Conference on Biometrics (ICB)"},"publicationDate":"2018-02-01"}