{"title":"FDFNet: A Secure Cancelable Deep Finger Dorsal Template Generation Network Secured via. Bio-Hashing","authors":"Avantika Singh, Ashish Arora, Shreyal Patel, Gaurav Jaswal, A. Nigam","doi":"10.1109/ISBA.2019.8778520","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778520","url":null,"abstract":"The modern online and digital world poses multiple challenging security problems. As in the physical world, personal identity management is crucial for any secure online system. The last decade has seen much work in this area using biometrics such as the face, fingerprint and iris. Yet several vulnerabilities remain, and the problem of compromised biometrics must be addressed much more seriously, since biometric traits cannot easily be modified once compromised. In this work, we propose a secure cancelable finger dorsal template generation network (learning domain-specific features) secured via Bio-Hashing. The proposed system effectively protects the original finger dorsal images by allowing a compromised template to be revoked and a new one reassigned. A novel Finger-Dorsal Feature Extraction Net (FDFNet) is proposed for extracting discriminative features. This network is trained exclusively on trait-specific features without using any pre-trained architecture. Bio-Hashing, a technique based on assigning a tokenized random number to each user, is then used to hash the features extracted by FDFNet. To evaluate the proposed architecture, we test it on two benchmark public finger knuckle datasets: PolyU FKP and PolyU Contactless FKI. The experimental results show the effectiveness of the proposed system in terms of security and accuracy.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129396998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
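The Bio-Hashing step described in the abstract above — hashing FDFNet features with a user-specific tokenized random number — can be sketched as a seeded random projection followed by binarisation. This is a minimal illustrative sketch, not the paper's implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def biohash(features: np.ndarray, user_token: int, n_bits: int = 64) -> np.ndarray:
    """Hash a real-valued feature vector into a cancelable binary template.

    The user's tokenized random number seeds a random projection basis
    (orthonormalised via QR); thresholding the projections at zero yields
    the binary code. Revoking a compromised template amounts to issuing
    the user a new token, which produces an entirely different code."""
    rng = np.random.default_rng(user_token)
    r = rng.standard_normal((features.size, n_bits))
    q, _ = np.linalg.qr(r)          # orthonormal projection directions
    return (features @ q > 0).astype(np.uint8)
```

The same features with the same token always reproduce the template, while a new token yields an unrelated one — the property that makes the scheme cancelable.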
{"title":"Spoofing PRNU Patterns of Iris Sensors while Preserving Iris Recognition","authors":"Sudipta Banerjee, Vahid Mirjalili, A. Ross","doi":"10.1109/ISBA.2019.8778483","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778483","url":null,"abstract":"The principle of Photo Response Non-Uniformity (PRNU) is used to link an image with its source, i.e., the sensor that produced it. In this work, we investigate if it is possible to modify an iris image acquired using one sensor in order to spoof the PRNU noise pattern of a different sensor. In this regard, we develop an image perturbation routine that iteratively modifies blocks of pixels in the original iris image such that its PRNU pattern approaches that of a target sensor. Experiments indicate the efficacy of the proposed perturbation method in spoofing PRNU patterns present in an iris image whilst still retaining its biometric content.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134032742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
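The PRNU principle underlying the attack above — a sensor fingerprint estimated as the average of noise residuals and matched by normalised cross-correlation — can be sketched as follows. A simple mean filter stands in for the denoising filter used in the PRNU literature; all names are illustrative, not the authors' code.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Noise residual W = I - F(I), with a k-by-k mean filter as a
    stand-in for the denoising filter F used in PRNU estimation."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(k) for dx in range(k)) / (k * k)
    return img - smooth

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised cross-correlation, used to link a residual to a sensor."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Averaging residuals over many images from one sensor isolates its fixed pattern; the perturbation attack in the paper iteratively edits pixel blocks so a probe's residual correlates with a *different* sensor's estimate.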
{"title":"Removing Personally Identifiable Information from Shared Dataset for Keystroke Authentication Research","authors":"Jiaju Huang, Bryan Klee, Daniel Schuckers, Daqing Hou, S. Schuckers","doi":"10.1109/ISBA.2019.8778628","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778628","url":null,"abstract":"Research on keystroke dynamics has great potential to offer continuous authentication that complements conventional authentication methods in combating insider threats and identity theft before more harm is done to genuine users. Unfortunately, the large amount of data required for free-text keystroke authentication often contains personally identifiable information (PII) and personally sensitive information, such as a user’s first and last name, the username and password for an account, bank card numbers, and social security numbers. As a result, there are privacy risks associated with keystroke data that must be mitigated before the data are shared with other researchers. We conduct a systematic study to remove PII from a recent large keystroke dataset. We find substantial amounts of PII in the dataset, including names, usernames and passwords, social security numbers, and bank card numbers, which, if leaked, could lead to various harms to the user, including personal embarrassment, blackmail, financial loss, and identity theft. We thoroughly evaluate the effectiveness of our detection program for each kind of PII. We demonstrate that our PII detection program can achieve near-perfect recall at the expense of losing some useful information (lower precision). Finally, we demonstrate that the removal of PII from the original dataset has only a negligible impact on the detection error tradeoff of the free-text authentication algorithm by Gunetti and Picardi. We hope that this experience report will be useful in informing the design of privacy removal in future keystroke dynamics based user authentication systems.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129198601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
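A rule-based detector of the kind the abstract above evaluates can be sketched with a regular expression for SSN-formatted strings plus a Luhn checksum to confirm candidate bank card numbers (which is what distinguishes a card number from an arbitrary digit run). This is an illustrative sketch, not the authors' program; patterns, tags, and names are hypothetical.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right;
    a valid card number sums to a multiple of 10."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def redact(text: str) -> str:
    """Replace detected PII with placeholder tags."""
    text = SSN.sub("[SSN]", text)

    def card_repl(m: re.Match) -> str:
        digits = re.sub(r"\D", "", m.group())
        # Only redact digit runs that pass the Luhn check.
        return "[CARD]" if luhn_valid(digits) else m.group()

    return CARD.sub(card_repl, text)
```

Loose patterns like these favour recall over precision, mirroring the trade-off reported in the paper: some benign text is lost, but PII rarely slips through.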
{"title":"Utilizing Template Diversity for Fusion Of Face Recognizers","authors":"S. Tulyakov, Nishant Sankaran, S. Setlur, V. Govindaraju","doi":"10.1109/ISBA.2019.8778556","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778556","url":null,"abstract":"If multiple face images are available for the creation of a person’s biometric template, some averaging method can be used to combine the feature vectors extracted from each image into a single template feature vector. The resulting average feature vector, however, does not retain information about the distribution of the image feature vectors. In this paper we consider augmenting such templates with information about the diversity of the constituent face images, e.g. the sample standard deviation of the image feature vectors. We present a theoretical model describing the conditions under which a template diversity measure is useful, and examine whether such conditions hold in real-life templates. We perform our experiments using IARPA face image datasets and deep CNN face recognizers.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115071746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
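The template augmentation the abstract above describes — keeping a diversity measure alongside the averaged feature vector — can be sketched as below. The fusion rule shown (cosine similarity mildly penalised by mean per-dimension standard deviation) is a hypothetical example of how a diversity term might enter a score, not the paper's model.

```python
import numpy as np

def build_template(image_features: np.ndarray):
    """Collapse per-image feature vectors of shape (n_images, d) into a
    template: the usual mean vector, augmented with a diversity measure
    (here, the per-dimension sample standard deviation)."""
    mean = image_features.mean(axis=0)
    diversity = image_features.std(axis=0, ddof=1)
    return mean, diversity

def match_score(probe: np.ndarray, template, w: float = 0.1) -> float:
    """Cosine similarity of the probe against the template mean, with a
    hypothetical penalty proportional to the template's diversity."""
    mean, diversity = template
    cos = probe @ mean / (np.linalg.norm(probe) * np.linalg.norm(mean) + 1e-12)
    return float(cos - w * diversity.mean())
```

Under this toy rule, a template built from tightly clustered images outscores one built from scattered images even when their means coincide — the kind of effect whose usefulness the paper's theoretical model characterises.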
{"title":"Thermal to Visual Face Recognition using Transfer Learning","authors":"Yaswanth Gavini, B. Mehtre, A. Agarwal","doi":"10.1109/ISBA.2019.8778474","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778474","url":null,"abstract":"Inter-modality face recognition refers to matching face images between different modalities, usually taking visual images as the source and one of the other modalities as the target. Face recognition between thermal and visual images is a difficult task because of the nonlinear spectral characteristics of thermal and visual imagery. However, it is a desirable capability for night-time security applications and military surveillance. In this paper, we propose a method to improve thermal classifier accuracy using transfer learning, which in turn increases the accuracy of thermal to visual face recognition. The proposed method is tested on the RGB-D-T dataset (45900 images) and the UND-Xl collection (4584 images). Experimental results show that, with the transferred knowledge, the overall accuracy of thermal to visual face recognition increases from 89.3% to 94.32% on the RGB-D-T dataset and from 81.54% to 90.33% on the UND-Xl dataset.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116627420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Convolutional Neural Network for Dot and Incipient Ridge Detection in High-resolution Fingerprints","authors":"V. Anand, Vivek Kanhangad","doi":"10.1109/ISBA.2019.8778527","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778527","url":null,"abstract":"Automated fingerprint recognition using partial and latent fingerprints employs level 3 features, which provide additional information in the absence of a sufficient number of level 1 and level 2 features. In this paper, we present a methodology for detecting two level 3 features, namely dots and incipient ridges. Specifically, we have designed a deep convolutional neural network that generates a dot map from the input fingerprint image. Subsequently, post-processing operations are performed on the obtained dot map to identify the coordinates of dots and incipient ridges. The results of our experiments on the publicly available PolyU HRF database demonstrate the effectiveness of the proposed algorithm.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126132321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
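The post-processing step the abstract above mentions — turning a CNN dot map into coordinates — typically amounts to thresholding the map, grouping responses into connected components, and reporting centroids. The sketch below illustrates that idea with a plain BFS labelling; it is an assumed pipeline for illustration, not the authors' exact operations.

```python
from collections import deque

import numpy as np

def dot_coordinates(dot_map: np.ndarray, threshold: float = 0.5):
    """Threshold a dot map, group responses into 4-connected
    components, and return each component's centroid (y, x)."""
    mask = dot_map > threshold
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    coords = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # BFS over one connected blob of responses.
                queue, pixels = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                coords.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return coords
```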
{"title":"Enhanced Segmentation-CNN based Finger-Vein Recognition by Joint Training with Automatically Generated and Manual Labels","authors":"Ehsaneddin Jalilian, A. Uhl","doi":"10.1109/ISBA.2019.8778522","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778522","url":null,"abstract":"Deep learning techniques are nowadays the leading approaches for solving complex machine learning and pattern recognition problems. For the first time, we utilize state-of-the-art semantic segmentation CNNs to extract vein patterns from near-infrared finger imagery and use them as the actual vein features in biometric finger-vein recognition. In this context, besides investigating the impact of training data volume, we propose a training model based on automatically generated labels to improve the recognition performance of the resulting vein structures compared to (i) network training using manual labels only, and (ii) well-established classical recognition techniques relying on publicly available software. With this model we also take a crucial step toward reducing the number of manually annotated labels required to train the networks, whose generation is extremely time-consuming and error-prone. As a further contribution, we release human-annotated ground-truth vein pixel labels (required for training the networks) for a subset of a well-known finger-vein database used in this work, together with a corresponding tool for further annotations.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131440758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
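One simple way to realise the joint training the abstract above proposes is to mix per-pixel losses against the automatically generated and the manual label maps. The weighted binary cross-entropy below is a hypothetical formulation for illustration only; the paper's actual training scheme may combine the label sources differently.

```python
import numpy as np

def joint_bce(pred: np.ndarray, auto_label: np.ndarray,
              manual_label: np.ndarray, alpha: float = 0.5) -> float:
    """Per-pixel binary cross-entropy of a vein-probability map against
    two label sources, mixed by alpha (hypothetical weighting):
    loss = alpha * BCE(manual) + (1 - alpha) * BCE(auto)."""
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)

    def bce(y: np.ndarray) -> float:
        return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

    return alpha * bce(manual_label) + (1 - alpha) * bce(auto_label)
```

Setting alpha toward 1 recovers training on manual labels only; lowering it lets the plentiful automatic labels carry more of the supervision.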
{"title":"Towards making Morphing Attack Detection robust using hybrid Scale-Space Colour Texture Features","authors":"Raghavendra Ramachandra, S. Venkatesh, K. Raja, C. Busch","doi":"10.1109/ISBA.2019.8778488","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778488","url":null,"abstract":"The widespread use of face recognition algorithms, especially in Automatic Border Control (ABC) systems, has raised concerns due to potential attacks. Face morphing combines two or more face images to generate a single image that can be used in the passport enrolment procedure. Such morphed passports have proven to be a significant threat to national security, as each of the individuals who contributed to the morphed reference image can use the single travel document. In this work, we present a novel method based on hybrid colour features to automatically detect morphed face images. The proposed method explores multiple colour spaces and scale-spaces using a Laplacian pyramid to extract robust features. The texture features corresponding to each scale-space in the different colour spaces are extracted with Local Binary Patterns (LBP) and classified using a Spectral Regression Kernel Discriminant Analysis (SRKDA) classifier. The scores are then fused using the sum rule to detect morphed face images. Experiments are carried out on a large-scale morphed face image database consisting of printed and scanned images to reflect a real-life passport issuance scenario. The evaluation database comprises 1270 bona fide face images and 2515 morphed face images. The performance of the proposed method is compared with seven deep learning and seven non-deep-learning methods, with the proposed scheme performing best: a Bona fide Presentation Classification Error Rate (BPCER) of 0.86% at an Attack Presentation Classification Error Rate (APCER) of 10%, and a BPCER of 7.59% at an APCER of 5%. The obtained results indicate robustness in detecting morphing attacks compared to earlier works.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132562765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
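The LBP texture descriptor at the heart of the pipeline above can be sketched as follows: each pixel is encoded as an 8-bit pattern by thresholding its neighbours against the centre, and an image (here a single channel; the paper applies this per colour space and pyramid level) is summarised as a 256-bin code histogram. A minimal vectorised sketch, not the authors' feature extractor:

```python
import numpy as np

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour LBP: one bit per neighbour comparison against
    the centre pixel, summarised as a normalised 256-bin histogram."""
    c = img[1:-1, 1:-1]
    neighbours = [
        img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
        img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
        img[2:, :-2], img[1:-1, :-2],
    ]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Concatenating such histograms across colour spaces and Laplacian-pyramid levels yields the hybrid scale-space colour texture features fed to the classifier.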
{"title":"Subband Analysis for Performance Improvement of Replay Attack Detection in Speaker Verification Systems","authors":"S. Garg, Shruti Bhilare, Vivek Kanhangad","doi":"10.1109/ISBA.2019.8778535","DOIUrl":"https://doi.org/10.1109/ISBA.2019.8778535","url":null,"abstract":"Automatic speaker verification systems have been widely employed in a variety of commercial applications. However, advancements in the field of speech technology have equipped attackers with sophisticated techniques for circumventing speaker verification systems. State-of-the-art countermeasures are fairly successful in detecting speech synthesis and voice conversion attacks; the problem of replay attack detection, however, has received much less attention from researchers. In this study, we perform subband analysis on constant-Q cepstral coefficient (CQCC) and mel-frequency cepstral coefficient (MFCC) features to improve the performance of replay attack detection. We performed experiments on the ASVspoof 2017 database, which consists of 3566 genuine and 15380 replay utterances. Our experimental results suggest that features extracted from the high-frequency band carry significant discriminatory information for replay attack detection. In particular, our approach achieves an improvement of 36.33% over the baseline replay attack detection method in terms of equal error rate.","PeriodicalId":270033,"journal":{"name":"2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122332399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
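The equal error rate used as the metric in the abstract above is the operating point where the false-acceptance rate (a replay scored as genuine) equals the false-rejection rate (a genuine utterance scored as replay). A minimal sketch of computing it from raw detection scores, assuming higher scores mean "more genuine":

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, spoof: np.ndarray) -> float:
    """Sweep every observed score as a threshold and return the EER:
    the point where FAR (spoof accepted) and FRR (genuine rejected)
    are closest, averaging the two rates there."""
    thresholds = np.sort(np.concatenate([genuine, spoof]))
    far = np.array([np.mean(spoof >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return float((far[i] + frr[i]) / 2)
```

A relative EER reduction such as the paper's 36.33% is then simply (baseline_eer - subband_eer) / baseline_eer.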