{"title":"Joint prototype and metric learning for set-to-set matching: Application to biometrics","authors":"Mengjun Leng, Panagiotis Moutafis, I. Kakadiaris","doi":"10.1109/BTAS.2015.7358771","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358771","url":null,"abstract":"In this paper, we focus on the problem of image set classification. Since existing methods utilize all available samples to model each image set, the corresponding time and storage requirements are high. Such methods are also susceptible to outliers. To address these challenges, we propose a method that jointly learns prototypes and a Mahalanobis distance. The prototypes learned represent the gallery image sets using fewer samples, while the classification accuracy is maintained or improved. The distance learned ensures that the notion of similarity between sets of images is reflected more accurately. Specifically, each gallery set is modeled as a hull spanned by the learned prototypes. The prototypes and distance metric are alternately updated using an iterative scheme. Experimental results using the YouTube Face, ETH-80, and Cambridge Hand Gesture datasets illustrate the improvements obtained.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129759070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Smartwatch-based biometric gait recognition","authors":"Andrew H. Johnston, Gary M. Weiss","doi":"10.1109/BTAS.2015.7358794","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358794","url":null,"abstract":"The advent of commercial smartwatches provides an intriguing new platform for mobile biometrics. Like their smartphone counterparts, these mobile devices can perform gait-based biometric identification because they too contain an accelerometer and a gyroscope. However, smartwatches have several advantages over smartphones for biometric identification because users almost always wear their watch in the same location and orientation. This location (i.e. the wrist) tends to provide more information about a user's movements than the most common location for smartphones (pockets or handbags). In this paper we show the feasibility of using smartwatches for gait-based biometrics by demonstrating the high levels of accuracy that can result from smartwatch-based identification and authentication models. Applications of smartwatch-based biometrics range from a new authentication challenge for use in a multifactor authentication system to automatic personalization by identifying the user of a shared device.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128225053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Latent fingerprint from multiple surfaces: Database and quality analysis","authors":"A. Sankaran, Akshay Agarwal, Rohit Keshari, Soumyadeep Ghosh, Anjali Sharma, Mayank Vatsa, Richa Singh","doi":"10.1109/BTAS.2015.7358773","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358773","url":null,"abstract":"Latent fingerprints are lifted from multiple types of surfaces, which vary in material type, texture, color, and shape. These differences in the surfaces introduce significant intra-class variations in the lifted prints such as availability of partial print, background noise, and poor ridge structure quality. Due to these observed variations, the overall quality and the matching performance of latent fingerprints vary with respect to surface properties. Thus, characterizing the performance of latent fingerprints according to the surfaces they are lifted from is an important research problem that needs attention. In this research, we create a novel multi-surface latent fingerprint database and make it publicly available for the research community. The database consists of 551 latent fingerprints from 51 subjects lifted from eight different surfaces. Using existing algorithms, we characterize the quality of latent fingerprints and compute the matching performance to analyze the effect of different surfaces.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"180 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132905524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvements to keystroke-based authentication by adding linguistic context","authors":"Adam Goodkind, David Guy Brizan, Andrew Rosenberg","doi":"10.1109/BTAS.2015.7358766","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358766","url":null,"abstract":"Traditional keystroke-based authentication methods rely on timing of and between individual keystrokes, oblivious to the context in which the typing is taking place. By incorporating linguistic context into a keystroke-based user authentication system, we are able to improve performance, as measured by EER. Taking advantage of patterns in keystroke dynamics, we show that typists employ unique behavior relative to syntactic and lexical constructs, which can be used to help identify the typist.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"30 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120845810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On smartphone camera based fingerphoto authentication","authors":"A. Sankaran, Aakarsh Malhotra, Apoorva Mittal, Mayank Vatsa, Richa Singh","doi":"10.1109/BTAS.2015.7358782","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358782","url":null,"abstract":"Authenticating fingerphoto images captured using a smartphone camera provides a good alternative to traditional pin- or pattern-based approaches. There are multiple challenges associated with fingerphoto authentication, such as background variations, environmental illumination, finger position estimation, and camera resolution. In this research, we propose a novel ScatNet feature based fingerphoto matching approach. Effective fingerphoto segmentation and enhancement are performed to aid the matching process and to attenuate the effect of capture variations. Further, we create a publicly available smartphone fingerphoto database with three subsets addressing the challenges of environmental illumination and background, along with the corresponding live scan fingerprints. Experimental results show improved performance across multiple challenges present in the database.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115356000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unconstrained face verification using fisher vectors computed from frontalized faces","authors":"Jun-Cheng Chen, S. Sankaranarayanan, Vishal M. Patel, R. Chellappa","doi":"10.1109/BTAS.2015.7358802","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358802","url":null,"abstract":"We present an algorithm for unconstrained face verification using Fisher vectors computed from frontalized off-frontal gallery and probe faces. In the training phase, we use the Labeled Faces in the Wild (LFW) dataset to learn the Fisher vector encoding and the joint Bayesian metric. Given an image containing the query face, we perform face detection and landmark localization followed by frontalization to normalize the effect of pose. We further extract dense SIFT features which are then encoded using the Fisher vector learnt during the training phase. The similarity scores are then computed using the learnt joint Bayesian metric. CMC curves and FAR/TAR numbers calculated for a subset of the IARPA JANUS challenge dataset are presented.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130775163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards repeatable, reproducible, and efficient biometric technology evaluations","authors":"Gregory Fiumara, W. Salamon, C. Watson","doi":"10.1109/BTAS.2015.7358800","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358800","url":null,"abstract":"With the proliferation of biometric-based identity management solutions, biometric algorithms need to be tested now more than ever. Independent biometric technology evaluations are needed to perform this testing, but they are not trivial to run, as demonstrated by the handful of organizations that have attempted such a feat. Worse, many software development packages available today for running biometric technology evaluations shy away from techniques that enable automation, a concept that supports reproducible research. The evaluation software used for testing biometric recognition algorithms needs to scale efficiently as the sample datasets employed by researchers grow increasingly large. With better software, additional entities with their own biometric data collection repositories could easily administer a reproducible biometric technology evaluation. Existing evaluation software is available, but these packages do not always follow best practices and they lack several important features. This paper identifies the necessary requirements and ideal characteristics of a robust biometric evaluation toolkit and introduces our implementation thereof, which has been used in several large-scale biometric technology evaluations by multiple organizations.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123676907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Limbus impact removal for off-angle iris recognition using eye models","authors":"Osman M. Kurtuncu, M. Karakaya","doi":"10.1109/BTAS.2015.7358775","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358775","url":null,"abstract":"Traditional iris recognition algorithms segment the iris image at the cornea-sclera border as the outer boundary because they consider the visible portion of the iris to be the entire iris texture. However, the limbus, a semitransparent eye structure at the junction of the cornea and sclera, occludes iris texture at the sides so that it cannot be seen in off-angle iris images. In the biometrics community, limbus occlusion has gone unnoticed due to its limited effect on frontal iris images. However, ignoring the effect of limbus occlusion in off-angle iris images causes significant performance degradation in iris biometrics. In this paper, we first investigate the limbus impact on off-angle iris recognition. Then, we propose a new approach to remove the effect of limbus occlusion. In our approach, we segment the iris image at its actual outer boundary instead of the visible outer boundary used in traditional methods, and normalize it based on the actual outer boundary. The invisible iris region in the unwrapped image that is occluded by the limbus is eliminated by including it in the mask. Based on the relation between the segmentation parameters of the actual and visible iris boundaries, we generate a transfer function and estimate the actual iris boundary from the segmented visible iris boundary given the known limbus height and gaze angle. Moreover, based on experiments with a synthetic iris dataset from the biometric eye model, we first show that not only the acquisition angle but also the limbus height negatively affects the performance of off-angle iris recognition, and then eliminate this negative effect by applying our proposed method.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124452239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Music and images as contexts in a context-aware touch-based authentication system","authors":"Abena Primo, V. Phoha","doi":"10.1109/BTAS.2015.7358779","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358779","url":null,"abstract":"Touch-based authentication is now a promising method for ensuring secure access on mobile devices, with some researchers reporting very low EERs for authentication. However, these works have not shown experimentally the impact of contexts beyond phone orientation on touch-based authentication. In this work, we present experimental results on how touch-based authentication is impacted by whether users are listening to music while swiping and by whether they are swiping over images. We experiment with a dataset collected for this purpose from 34 subjects. Moreover, we provide design considerations towards a touch-based context-aware system and show how a module which considers the presence of music (which we found to have statistical significance) can be incorporated.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A robust sclera segmentation algorithm","authors":"P. Radu, J. Ferryman, Peter Wild","doi":"10.1109/BTAS.2015.7358746","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358746","url":null,"abstract":"Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic, but mainly treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at pixel level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked 1st in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134523660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}