{"title":"Robust face presentation attack detection on smartphones : An approach based on variable focus","authors":"K. Raja, P. Wasnik, Ramachandra Raghavendra, C. Busch","doi":"10.1109/BTAS.2017.8272753","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272753","url":null,"abstract":"Smartphone-based facial biometric systems are widely used in security applications, from simple phone unlocking to secure banking. This work presents a new approach that exploits the intrinsic characteristics of the smartphone camera to capture a stack of images across the depth-of-field. With the set of stack images obtained, we present a feature-free and classifier-free approach to building a presentation-attack-resistant face biometric system. With the entire system implemented on the smartphone, we demonstrate the applicability of the proposed scheme in obtaining a stack of images with varying focus to effectively detect presentation attacks. We create a new database of 13250 images at different focal lengths to present a detailed vulnerability analysis together with an evaluation of the proposed scheme. An extensive evaluation on the newly created database, comprising 5 different Presentation Attack Instruments (PAI), demonstrates outstanding performance of the proposed approach on all 5 PAI. Given the complementary benefits of the proposed approach illustrated in this work, we deduce its robustness towards unseen 2D attacks.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121087894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
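The variable-focus idea in the abstract above lends itself to a compact illustration. The sketch below is a minimal, hypothetical version (NumPy only; the variance-of-Laplacian focus measure, the relative-spread statistic, and the threshold are our assumptions, not the paper's classifier-free scheme): a planar 2D artefact keeps roughly constant sharpness across lens positions, while a real 3D face produces a much wider spread of focus scores.

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian: a standard sharpness score."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img):
    """Cheap 5-point blur, used here only to simulate defocus."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       + img[1:-1, 1:-1]) / 5.0
    return out

def looks_planar(stack, rel_spread=0.5):
    """Hypothetical decision rule: a flat 2D attack keeps nearly constant
    sharpness across the focus stack; a real 3D face varies far more."""
    scores = [focus_measure(img) for img in stack]
    spread = (max(scores) - min(scores)) / (max(scores) + 1e-12)
    return spread < rel_spread
```

On a synthetic stack, a trio of identical defocused images is flagged as planar, while a stack whose sharpness varies with lens position is not.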
{"title":"Using associative classification to authenticate mobile device users","authors":"T. Neal, D. Woodard","doi":"10.1109/BTAS.2017.8272684","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272684","url":null,"abstract":"Because passwords and personal identification numbers are easily forgotten, stolen, or reused on multiple accounts, the current norm for mobile device security is quickly becoming inefficient and inconvenient. Thus, manufacturers have worked to make physiological biometrics accessible to mobile device owners as improved security measures. While behavioral biometrics has yet to receive commercial attention, researchers have continued to consider these approaches as well. However, studies of interactive data are limited, and efforts which are aimed at improving the performance of such techniques remain relevant. Thus, this paper provides a performance analysis of application, Bluetooth, and Wi-Fi data collected from 189 subjects on a mobile device for user verification. Results indicate that user authentication can be achieved with up to 91% accuracy, demonstrating the effectiveness of associative classification as a feature extraction technique.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"43 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124969286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shared dataset on natural human-computer interaction to support continuous authentication research","authors":"Chris Murphy, Jiaju Huang, Daqing Hou, S. Schuckers","doi":"10.1109/BTAS.2017.8272738","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272738","url":null,"abstract":"Conventional one-stop authentication of a computer terminal takes place at a user's initial sign-on. In contrast, continuous authentication protects against the case where an intruder takes over an authenticated terminal or simply has access to sign-on credentials. Behavioral biometrics has had some success in providing continuous authentication without requiring additional hardware. However, further advancement requires benchmarking existing algorithms against large, shared datasets. To this end, we provide a novel large dataset that captures not only keystrokes, but also mouse events and active programs. Our dataset is collected using passive logging software to monitor user interactions with the mouse, keyboard, and software programs. Data was collected from 103 users in a completely uncontrolled, natural setting, over a time span of 2.5 years. We apply Gunetti & Picardi's algorithm, a state-of-the-art algorithm in free-text keystroke dynamics, as an initial benchmark for the new dataset.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127930750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
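Gunetti & Picardi's free-text comparison rests on a primitive that is short enough to sketch: the "degree of disorder" between the latency orderings of the n-graphs two typing samples share. The sketch below uses digraph mean latencies and is our simplification of their full A/R-measure machinery, not the benchmarked implementation.

```python
def disorder(sample_a, sample_b):
    """Degree of disorder between two typing samples, each given as a
    dict mapping digraph -> mean latency (ms).
    0.0 = identical latency ordering, 1.0 = fully reversed ordering."""
    shared = set(sample_a) & set(sample_b)
    if len(shared) < 2:
        return 1.0  # nothing comparable: treat as maximally dissimilar
    order_a = sorted(shared, key=lambda g: sample_a[g])
    order_b = sorted(shared, key=lambda g: sample_b[g])
    rank_b = {g: i for i, g in enumerate(order_b)}
    total = sum(abs(i - rank_b[g]) for i, g in enumerate(order_a))
    n = len(shared)
    max_disorder = (n * n - (n % 2)) / 2  # disorder of a fully reversed list
    return total / max_disorder
```

A genuine user's two samples tend to order shared digraphs similarly (low disorder); an impostor's ordering diverges.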
{"title":"Towards open-set face recognition using hashing functions","authors":"R. H. Vareto, Samira Silva, F. Costa, W. R. Schwartz","doi":"10.1109/BTAS.2017.8272751","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272751","url":null,"abstract":"Face recognition is one of the most relevant problems in computer vision, given its importance to areas such as surveillance, forensics and psychology. Furthermore, open-set face recognition has large room for improvement, since only a few researchers have focused on it. In fact, a real-world recognition system has to cope with several unseen individuals and determine whether a given face image is associated with a subject registered in a gallery of known individuals. In this work, we combine hashing functions and classification methods to estimate when probe samples are known (i.e., belong to the gallery set). We carry out experiments with partial least squares and neural networks and show how response value histograms tend to behave for known and unknown individuals whenever we test a probe sample. In addition, we conduct experiments on FRGCv1, PubFig83 and VGGFace to show that our method remains effective regardless of dataset difficulty.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133316512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
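The known/unknown decision from response histograms can be illustrated with a toy matcher. The sketch below uses random-hyperplane hashing over fixed-length embeddings; the voting rule, the 0.5 peak threshold, and the class name are our assumptions, not the paper's PLS/neural-network hashing functions. The intuition matches the abstract: a known probe's bucket collisions pile up on one gallery identity, while an unknown probe's votes stay flat.

```python
import numpy as np

def hash_codes(X, planes):
    """One bit per hyperplane: the sign of the projection."""
    return (X @ planes.T > 0).astype(np.uint8)

class OpenSetHasher:
    """Toy open-set matcher: enroll gallery embeddings into several hash
    tables; a probe is 'known' only when its bucket collisions concentrate
    on a single identity."""

    def __init__(self, dim, n_tables=32, bits=8, seed=7):
        rng = np.random.default_rng(seed)
        self.planes = [rng.standard_normal((bits, dim)) for _ in range(n_tables)]

    def enroll(self, gallery, labels):
        self.tables = []
        for planes in self.planes:
            table = {}
            for code, lab in zip(map(bytes, hash_codes(gallery, planes)), labels):
                table.setdefault(code, []).append(lab)
            self.tables.append(table)

    def query(self, probe, peak=0.5):
        votes = {}
        for planes, table in zip(self.planes, self.tables):
            code = bytes(hash_codes(probe[None, :], planes)[0])
            for lab in table.get(code, []):
                votes[lab] = votes.get(lab, 0) + 1
        if not votes:
            return None                      # no collisions at all -> unknown
        best = max(votes, key=votes.get)
        return best if votes[best] / len(self.planes) >= peak else None
```

An enrolled embedding queried back collides in every table and is accepted; a vector far from every gallery subject scatters its few collisions and is rejected as unknown.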
{"title":"Conditional random fields incorporate convolutional neural networks for human eye sclera semantic segmentation","authors":"Russel Mesbah, B. McCane, S. Mills","doi":"10.1109/BTAS.2017.8272768","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272768","url":null,"abstract":"Sclera segmentation as an ocular biometric has been of interest in a variety of security and medical applications. Current approaches mostly rely on handcrafted features, which make it challenging to generalise the learnt hypothesis to images taken from various angles and in different visible-light spectra. Convolutional Neural Networks (CNNs) are capable of extracting the corresponding features automatically. Although CNNs have shown remarkable performance in a variety of image semantic segmentation tasks, their output can be noisy and less accurate, particularly at object boundaries. To address this issue, we use Conditional Random Fields (CRFs) to refine the CNN outputs. The results of applying this technique to the sclera segmentation dataset (SSERBC 2017) are comparable with state-of-the-art solutions.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"480 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123054720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
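The CRF-on-top-of-CNN refinement can be caricatured in a few lines. Below is a toy pairwise smoother (iterated conditional modes on a 4-connected Potts-style model; the paper's CRF is more elaborate, and the weight `beta` is our choice): each pixel trades the CNN's sclera probability against agreement with its neighbours, which is exactly the mechanism that cleans up noisy boundary pixels.

```python
import numpy as np

def smooth_labels(prob, beta=0.8, iters=5):
    """Refine a CNN probability map with a toy pairwise term.
    prob: 2-D array of per-pixel sclera probabilities in [0, 1].
    Returns a boolean sclera mask."""
    label = (prob > 0.5).astype(float)
    for _ in range(iters):
        pad = np.pad(label, 1, mode='edge')
        # number of 4-connected neighbours currently labelled sclera (0..4)
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:])
        # unary: CNN log-odds; pairwise: net agreement of the 4 neighbours
        score = (np.log(prob + 1e-6) - np.log(1 - prob + 1e-6)
                 + beta * (2 * neigh - 4))
        label = (score > 0).astype(float)
    return label.astype(bool)
```

A lone low-probability pixel inside a confident sclera region is flipped to sclera by its neighbours, and the symmetric speckle on a background region is removed.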
{"title":"Accuracy evaluation of handwritten signature verification: Rethinking the random-skilled forgeries dichotomy","authors":"Javier Galbally, M. Gomez-Barrero, A. Ross","doi":"10.1109/BTAS.2017.8272711","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272711","url":null,"abstract":"Traditionally, the accuracy of signature verification systems has been evaluated following a protocol that considers two independent impostor scenarios: random forgeries and skilled forgeries. Although such an approach is not necessarily incorrect, it can lead to a misinterpretation of the results of the assessment process. Furthermore, such a full separation between both types of impostors may be unrealistic in many operational real-world applications. The current article discusses the soundness of the random-skilled impostor dichotomy and proposes complementary approaches to report the accuracy of signature verification systems, discussing their advantages and limitations.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"316 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113958766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
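One complementary reporting approach the abstract gestures at can be made concrete: rather than two disconnected FAR figures, report error rates over a weighted mixture of random and skilled impostor scores. The weighting below is our illustration, not necessarily the article's exact proposal.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Standard error rates at one operating point (higher score = better match)."""
    far = float(np.mean(np.asarray(impostor) >= threshold))
    frr = float(np.mean(np.asarray(genuine) < threshold))
    return far, frr

def pooled_far(random_scores, skilled_scores, threshold, w_skilled=0.5):
    """Single FAR over a mixture of both impostor types, replacing the
    strict random/skilled dichotomy with one tunable operating figure."""
    far_r = np.mean(np.asarray(random_scores) >= threshold)
    far_s = np.mean(np.asarray(skilled_scores) >= threshold)
    return float((1.0 - w_skilled) * far_r + w_skilled * far_s)
```

Sweeping `w_skilled` from 0 to 1 recovers the two traditional figures as endpoints, which makes explicit how much of the reported accuracy depends on the assumed impostor population.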
{"title":"DNA2FACE: An approach to correlating 3D facial structure and DNA","authors":"Nisha Srinivas, Ryan Tokola, A. Mikkilineni, I. Nookaew, M. Leuze, Chris Boehnen","doi":"10.1109/BTAS.2017.8272746","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272746","url":null,"abstract":"In this paper we introduce the concept of correlating genetic variations in an individual's specific genetic code (DNA) with facial morphology. This is the first step in a research effort to estimate facial appearance from DNA samples, which is gaining momentum within the intelligence, law enforcement and national security communities. The dataset for the study, consisting of genetic data and 3D facial scan (phenotype) data, was obtained through the FaceBase Consortium. The proposed approach has three main steps: phenotype feature extraction from 3D face images, genotype feature extraction from a DNA sample, and genome-wide association analysis to determine genetic variations that contribute to facial structure and appearance. Results indicate that there exist significant correlations between genetic information and facial structure. We have identified 30 single nucleotide polymorphisms (SNPs), i.e. genetic variations, that significantly contribute to facial structure and appearance. We conclude with a preliminary attempt at facial reconstruction from the genetic data and emphasize the complexity of the problem and the challenges encountered.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122832353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finger vein image retrieval via affinity-preserving K-means hashing","authors":"Kun Su, Gongping Yang, Lu Yang, Yilong Yin","doi":"10.1109/BTAS.2017.8272720","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272720","url":null,"abstract":"Efficient identification of finger veins remains a challenging problem due to the increasing size of finger vein databases. Most leading finger vein identification methods rely on high-dimensional real-valued features, which result in extremely high computational complexity. Hashing algorithms are an effective way to facilitate finger vein image retrieval. Therefore, in this paper we propose a finger vein image retrieval scheme based on the Affinity-Preserving K-means Hashing (APKMH) algorithm and a bag-of-subspaces image feature. First, we represent the finger vein image by the Nonlinearly Sub-space Coding (NSC) method, which obtains discriminative finger vein image features. The feature space is then partitioned into multiple subsegments. In each subsegment, we employ the APKMH algorithm, which simultaneously constructs the visual codebook directly by k-means clustering and encodes the feature vector as the binary index of its codeword. Experimental results on a large fused finger vein dataset demonstrate that our hashing method outperforms state-of-the-art finger vein retrieval methods.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123957306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
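The segment-wise codebook construction described above follows the product-quantization pattern, which is short enough to sketch. Note the hedge: this is plain per-segment k-means; APKMH proper additionally learns the codebooks so that Hamming distances between binary indices preserve the original affinities, which this sketch omits.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means; returns the k cluster centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers

def build_codebooks(X, n_segments=4, k=16):
    """Split each feature vector into segments and learn one codebook per
    segment (a product-quantization-style simplification of APKMH)."""
    segs = np.array_split(X, n_segments, axis=1)
    return [kmeans(s, k) for s in segs]

def encode(x, codebooks):
    """Compact code: the index of the nearest codeword in each segment."""
    segs = np.array_split(x, len(codebooks))
    return [int(((c - s) ** 2).sum(1).argmin()) for s, c in zip(segs, codebooks)]
```

Retrieval then compares these short index codes instead of the high-dimensional real-valued features, which is where the speedup comes from.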
{"title":"Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing","authors":"Rui Shao, X. Lan, P. Yuen","doi":"10.1109/BTAS.2017.8272765","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272765","url":null,"abstract":"3D mask spoofing attacks have become one of the main challenges in face recognition. A real face displays different motion behaviour compared to a 3D mask spoof attempt, which is reflected in different facial dynamic textures. However, this dynamic information usually exists at the subtle texture level, which cannot be fully differentiated by traditional hand-crafted texture-based methods. In this paper, we propose a novel method for 3D mask face anti-spoofing, namely deep convolutional dynamic texture learning, which learns robust dynamic texture information from fine-grained deep convolutional features. Moreover, a channel-discriminability constraint is adaptively incorporated to weight the discriminability of feature channels in the learning process. Experiments on public datasets validate that the proposed method achieves promising results under both intra-dataset and cross-dataset scenarios.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126283746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning with time-frequency representation for pulse estimation from facial videos","authors":"G. Hsu, Arulmurugan Ambikapathi, Ming Chen","doi":"10.1109/BTAS.2017.8272721","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272721","url":null,"abstract":"Accurate pulse estimation is of pivotal importance in acquiring the critical physical conditions of human subjects under test, and facial-video-based pulse estimation approaches have recently gained attention owing to their simplicity. In this work, we have endeavored to develop a novel deep learning approach as the core part of pulse (heart rate) estimation using a common RGB camera. Our approach consists of four steps. We first detect the face and its landmarks, and thereby locate the required facial ROI. In Step 2, we extract the sample mean sequences of the R, G, and B channels from the facial ROI, and explore three processing schemes for noise removal and signal enhancement. In Step 3, the Short-Time Fourier Transform (STFT) is employed to build the 2D Time-Frequency Representations (TFRs) of the sequences. The 2D TFR enables the formulation of pulse estimation as an image-based classification problem, which can be solved in Step 4 by a deep Convolutional Neural Network (CNN). Our approach is among the first attempts at real-time pulse estimation using a deep learning framework. We have developed a pulse database, called Pulse from Face (PFF), and used it to train the CNN. The PFF database will be made publicly available to advance related research. When compared to state-of-the-art pulse estimation approaches on the standard MAHNOB-HCI database, the proposed approach exhibits superior performance.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"604 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129215996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
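Steps 2-3 of the pipeline above (channel-mean traces to a 2D time-frequency representation) reduce to a short spectrogram computation. The sketch below stops before the CNN of Step 4 and instead reads the pulse off the TFR directly by peak-picking; the window length, hop size, frame rate, and the 0.7-4 Hz heart-rate band are our assumptions, not the paper's settings.

```python
import numpy as np

FPS = 30.0        # assumed camera frame rate
WIN, HOP = 128, 16

def stft_tfr(trace, win=WIN, hop=HOP):
    """Magnitude spectrogram of a 1-D channel-mean trace: the 2D TFR that
    the paper feeds to a CNN as an image."""
    sig = trace - trace.mean()
    frames = np.array([sig[i:i + win] * np.hanning(win)
                       for i in range(0, len(sig) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))

def dominant_pulse_bpm(trace, fps=FPS, lo=0.7, hi=4.0):
    """Peak frequency inside a plausible heart-rate band (42-240 bpm),
    averaged over all STFT frames; a peak-picking stand-in for the CNN."""
    tfr = stft_tfr(trace)
    freqs = np.fft.rfftfreq(WIN, d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    spec = tfr.mean(axis=0)
    return float(freqs[band][spec[band].argmax()] * 60.0)
```

On a synthetic 1.2 Hz (72 bpm) trace with slow illumination drift, the band-limited peak lands within one FFT bin of the true rate; the drift falls below the band and is ignored.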