Latest Publications: 2020 IEEE International Joint Conference on Biometrics (IJCB)

How Does Gender Balance In Training Data Affect Face Recognition Accuracy?
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-02-07 DOI: 10.1109/IJCB48548.2020.9304924
Vítor Albiero, Kai Zhang, K. Bowyer
{"title":"How Does Gender Balance In Training Data Affect Face Recognition Accuracy?","authors":"Vítor Albiero, Kai Zhang, K. Bowyer","doi":"10.1109/IJCB48548.2020.9304924","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304924","url":null,"abstract":"Deep learning methods have greatly increased the accuracy of face recognition, but an old problem still persists: accuracy is usually higher for men than women. It is often speculated that lower accuracy for women is caused by under-representation in the training data. This work investigates female under-representation in the training data is truly the cause of lower accuracy for females on test data. Using a state-of-the-art deep CNN, three different loss functions, and two training datasets, we train each on seven subsets with different male/female ratios, totaling forty two trainings, that are tested on three different datasets. Results show that (1) gender balance in the training data does not translate into gender balance in the test accuracy, (2) the “gender gap” in test accuracy is not minimized by a gender-balanced training set, but by a training set with more male images than female images, and (3) training to minimize the accuracy gap does not result in highest female, male or average accuracy.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117102236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
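The study above varies the male/female ratio of the training identities across seven subsets. Below is a minimal sketch of how such ratio-controlled subsets could be assembled; the dataset layout (a list of (identity, gender) pairs), the sampling helper, and the specific ratios are illustrative assumptions, not the authors' protocol.

```python
import random

def make_subset(identities, male_fraction, total, seed=0):
    """Sample `total` identities with a fixed male/female mix.

    `identities` is assumed to be a list of (identity_id, gender) pairs,
    with gender given as "M" or "F"; this layout is hypothetical.
    """
    rng = random.Random(seed)
    males = [i for i, g in identities if g == "M"]
    females = [i for i, g in identities if g == "F"]
    n_male = int(round(total * male_fraction))
    return rng.sample(males, n_male) + rng.sample(females, total - n_male)

# Seven illustrative ratios from all-female to all-male training identities.
ratios = [0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0]
demo = [(f"id_{k}", "M" if k % 2 else "F") for k in range(2000)]
subsets = {r: make_subset(demo, r, total=500) for r in ratios}
```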
Identity Document to Selfie Face Matching Across Adolescence
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2019-12-20 DOI: 10.1109/IJCB48548.2020.9304906
Vítor Albiero, Nisha Srinivas, Esteban Villalobos, Jorge Perez-Facuse, Robert Rosenthal, D. Mery, K. Ricanek, K. Bowyer
{"title":"Identity Document to Selfie Face Matching Across Adolescence","authors":"Vítor Albiero, Nisha Srinivas, Esteban Villalobos, Jorge Perez-Facuse, Robert Rosenthal, D. Mery, K. Ricanek, K. Bowyer","doi":"10.1109/IJCB48548.2020.9304906","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304906","url":null,"abstract":"Matching live images (“selfies”) to images from ID documents is a problem that can arise in various applications. A challenging instance of the problem arises when the face image on the ID document is from early adolescence and the live image is from later adolescence. We explore this problem using a private dataset called Chilean Young Adult (CHIYA) dataset, where we match live face images taken at age 18–19 to face images on scanned ID documents created at ages 9 to 18. State-of-the-art deep learning face matchers (e.g., ArcFace) have relatively poor accuracy for document-to-selfie face matching. To achieve higher accuracy, we fine-tune the best available open-source model with triplet loss for a few-shot learning. Experiments show that our approach achieves higher accuracy than the DocFace+ model recently developed for this problem. Our fine-tuned model was able to improve the true acceptance rate for the most difficult (largest age span) subset from 62.92% to 96.67% at a false acceptance rate of 0.01%. Our fine-tuned model is available for use by other researchers.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116215190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
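The approach above fine-tunes a pretrained face-embedding model with triplet loss. Below is a minimal sketch of one such fine-tuning step in PyTorch; the tiny stand-in backbone, margin, image size, and optimizer settings are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Stand-in embedding network; the paper fine-tunes a pretrained open-source
# face matcher, which is not reproduced here.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 256))
triplet = nn.TripletMarginLoss(margin=0.3)
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3)

# One illustrative step on random tensors standing in for
# anchor = document photo, positive = the same person's selfie,
# negative = a different person's selfie.
anchor, positive, negative = (torch.randn(8, 3, 112, 112) for _ in range(3))
loss = triplet(backbone(anchor), backbone(positive), backbone(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```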
Fingerprint Spoof Detection: Temporal Analysis of Image Sequence
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2019-12-17 DOI: 10.1109/IJCB48548.2020.9304921
T. Chugh, Anil K. Jain
{"title":"Fingerprint Spoof Detection: Temporal Analysis of Image Sequence","authors":"T. Chugh, Anil K. Jain","doi":"10.1109/IJCB48548.2020.9304921","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304921","url":null,"abstract":"We utilize the dynamics involved in the imaging of a fingerprint on a touch-based fingerprint reader, such as perspiration, changes in skin color (blanching), and skin distortion, to differentiate real fingers from spoof (fake) fingers. Specifically, we utilize a deep learning-based architecture (CNN-LSTM) trained end-to-end using sequences of minutiae-centered local patches extracted from ten color frames captured on a COTS fingerprint reader. A time-distributed CNN (MobileNet-v1) extracts spatial features from each local patch, while a bi-directional LSTM layer learns the temporal relationship between the patches in the sequence. Experimental results on a database of 26, 650 live frames from 685 subjects (1,333 unique fingers), and 32,910 spoof frames of 7 spoof materials (with a total of 14 material variants), show that the proposed approach exceeds the state-of-the-art performance in both known-material and cross-material (generalization) scenarios. For instance, the proposed approach improves the state-of-the-art cross-material performance from TDR of 81.65% to 86.20% @ FDR = 0.2%.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121351944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
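The abstract above describes a time-distributed CNN followed by a bi-directional LSTM over a sequence of patches. The sketch below shows that wiring in PyTorch with a small stand-in CNN in place of MobileNet-v1; layer sizes, patch size, and the scoring head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchSequenceClassifier(nn.Module):
    """CNN features per frame patch, a bi-directional LSTM over time,
    then a single live/spoof score (a stand-in for the paper's model)."""

    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        # Small CNN applied identically to every time step ("time-distributed").
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):  # x: (batch, time, channels, height, width)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-patch features
        out, _ = self.lstm(feats)                          # temporal modeling
        return self.head(out[:, -1])                       # score from last step

# Ten patches per sequence, mirroring the ten color frames described above.
scores = PatchSequenceClassifier()(torch.randn(2, 10, 3, 96, 96))
```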
Fingerprint Synthesis: Search with 100 Million Prints
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2019-12-16 DOI: 10.1109/IJCB48548.2020.9304885
Vishesh Mistry, Joshua J. Engelsma, Anil K. Jain
{"title":"Fingerprint Synthesis: Search with 100 Million Prints","authors":"Vishesh Mistry, Joshua J. Engelsma, Anil K. Jain","doi":"10.1109/IJCB48548.2020.9304885","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304885","url":null,"abstract":"Evaluation of large-scale fingerprint search algorithms has been limited due to lack of publicly available datasets. To address this problem, we utilize a Generative Adversarial Network (GAN) to synthesize a fingerprint dataset consisting of 100 million fingerprint images. In contrast to existing fingerprint synthesis algorithms, we incorporate an identity loss which guides the generator to synthesize fingerprints corresponding to more distinct identities. The characteristics of our synthesized fingerprints are shown to be more similar to real fingerprints than existing meth- ods via eight different metrics (minutiae count - block and template, minutiae direction - block and template, minutiae convex hull area, minutiae spatial distribution, block minutiae quality distribution, and NFIQ 2.0 scores). Additionally, the synthetic fingerprints based on our approach are shown to be more distinct than synthetic fingerprints based on published methods through search results and imposter distribution statistics. Finally, we report for the first time in open literature, search accuracy against a gallery of 1 00 million fingerprints (NIST SD4 Rank-1 accuracy of 89.7%).","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126805644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
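The generator above is guided by an identity loss so that synthesized fingerprints correspond to more distinct identities. One simple way to express that idea is to penalize pairwise similarity between identity embeddings of a generated batch, as sketched below; the embedding network, normalization, and weighting are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def identity_spread_loss(embeddings):
    """Mean pairwise cosine similarity between identity embeddings of a
    generated batch; driving this down pushes samples toward distinct identities."""
    e = F.normalize(embeddings, dim=1)
    sim = e @ e.t()
    off_diagonal = sim[~torch.eye(len(e), dtype=torch.bool)]
    return off_diagonal.mean()

# Stand-in embeddings for a batch of generated prints (a real system would use
# a fingerprint recognition network to embed the generator's outputs).
fake_embeddings = torch.randn(16, 192)
loss_id = identity_spread_loss(fake_embeddings)
# total_generator_loss = adversarial_loss + lambda_id * loss_id   (weights assumed)
```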
White-Box Evaluation of Fingerprint Matchers: Robustness to Minutiae Perturbations
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2019-09-02 DOI: 10.1109/IJCB48548.2020.9304918
Steven A. Grosz, Joshua J. Engelsma, N. Paulter, Anil K. Jain
{"title":"White-Box Evaluation of Fingerprint Matchers: Robustness to Minutiae Perturbations","authors":"Steven A. Grosz, Joshua J. Engelsma, N. Paulter, Anil K. Jain","doi":"10.1109/IJCB48548.2020.9304918","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304918","url":null,"abstract":"Prevailing evaluations of fingerprint recognition systems have been performed as end-to-end black-box tests of fingerprint identification or authentication accuracy. However, performance of the end-to-end system is subject to errors arising in any of its constituent modules, including: fingerprint scanning, preprocessing, feature extraction, and matching. Conversely, white-box evaluations provide a more granular evaluation by studying the individual subcomponents of a system. While a few studies have conducted stand-alone evaluations of the fingerprint reader and feature extraction modules of fingerprint recognition systems, little work has been devoted towards white-box evaluations of the fingerprint matching module. We report results of a controlled, white-box evaluation of one open-source and two commercial-off-the-shelf (COTS) minutiae-based matchers in terms of their robustness against controlled perturbations (random noise and non-linear distortions) introduced into the input minutiae feature sets. Our white-box evaluations reveal that the performance of fingerprint minutiae matchers are more susceptible to non-linear distortion and missing minutiae than spurious minutiae and small positional displacements of the minutiae locations.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126258213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
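The evaluation above feeds controlled perturbations into the matcher's input minutiae sets. The sketch below shows two such perturbations, random positional displacement and randomly dropped minutiae, applied to a toy (x, y, angle) template; the field layout and magnitudes are illustrative assumptions, and the actual study also covers spurious minutiae and non-linear distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

def displace_positions(minutiae, sigma_px=3.0):
    """Add Gaussian displacement to each minutia's (x, y) location."""
    noisy = minutiae.copy()
    noisy[:, :2] += rng.normal(0.0, sigma_px, size=(len(noisy), 2))
    return noisy

def drop_minutiae(minutiae, fraction=0.2):
    """Remove a random fraction of minutiae (simulating missing minutiae)."""
    keep = rng.random(len(minutiae)) >= fraction
    return minutiae[keep]

# Toy template: 40 minutiae as rows of (x, y, angle) on a 500x500 image.
template = rng.uniform([0, 0, 0], [500, 500, 2 * np.pi], size=(40, 3))
perturbed = drop_minutiae(displace_positions(template))
# Each perturbed template would then be matched against the original template
# by the matcher under test, and the score degradation recorded.
```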
AdvFaces: Adversarial Face Synthesis
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2019-08-14 DOI: 10.1109/IJCB48548.2020.9304898
Debayan Deb, Jianbang Zhang, Anil K. Jain
{"title":"AdvFaces: Adversarial Face Synthesis","authors":"Debayan Deb, Jianbang Zhang, Anil K. Jain","doi":"10.1109/IJCB48548.2020.9304898","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304898","url":null,"abstract":"Face recognition systems have been shown to be vulnerable to adversarial faces resulting from adding small perturbations to probe images. Such adversarial images can lead state-of-the-art face matchers to falsely reject a genuine subject (obfuscation attack) or falsely match to an impostor (impersonation attack). Current approaches to crafting adversarial faces lack perceptual quality and take an unreasonable amount of time to generate them. We propose, AdvFaces, an automated adversarial face synthesis method that learns to generate minimal perturbations in the salient facial regions via Generative Adversarial Networks. Once AdvFaces is trained, a hacker can automatically generate imperceptible face perturbations that can evade four black-box state-of-the-art face matchers with attack success rates as high as 97.22% and 24.30% at 0.1 % False Accept Rate, for obfuscation and impersonation attacks, respectively.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"359 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115470839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 87
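The method above trains a generator to produce minimal perturbations that defeat a face matcher. A minimal sketch of an obfuscation-style training objective is shown below: push the matcher embedding of the perturbed face away from the original while keeping the perturbation small. The networks, clamping, and weighting are placeholders, and the paper's full objective also includes a GAN realism term that is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def obfuscation_objective(face, perturbation, embed, size_weight=10.0):
    """Lower is better: low matcher similarity to the original face,
    plus a penalty that keeps the perturbation small."""
    adversarial = torch.clamp(face + perturbation, 0.0, 1.0)
    similarity = F.cosine_similarity(embed(adversarial), embed(face)).mean()
    magnitude = perturbation.flatten(1).norm(dim=1).mean()
    return similarity + size_weight * magnitude

# Stand-ins: a toy embedding network and a random perturbation in place of the
# generator's output; a real attack would backpropagate through a generator.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
faces = torch.rand(4, 3, 112, 112)
perturbation = 0.01 * torch.randn_like(faces)
loss = obfuscation_objective(faces, perturbation, embed)
```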
Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2018-12-07 DOI: 10.1109/IJCB48548.2020.9304875
Jacob Dumford, W. Scheirer
{"title":"Backdooring Convolutional Neural Networks via Targeted Weight Perturbations","authors":"Jacob Dumford, W. Scheirer","doi":"10.1109/IJCB48548.2020.9304875","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304875","url":null,"abstract":"We present a new white-box backdoor attack that exploits a vulnerability of convolutional neural networks (CNNs). In particular, we examine the application of facial recognition. Deep learning techniques are at the top of the game for facial recognition, which means they have now been implemented in many production-level systems. Alarmingly, unlike other commercial technologies such as operating systems and network devices, deep learning-based facial recognition algorithms are not presently designed with security requirements or audited for security vulnerabilities before deployment. Given how young the technology is and how abstract many of the internal workings of these algorithms are, neural network-based facial recognition systems are prime targets for security breaches. As more and more of our personal information begins to be guarded by facial recognition (e.g., the iPhone X), exploring the security vulnerabilities of these systems from a penetration testing standpoint is crucial. Along these lines, we describe a general methodology for backdooring CNNs via targeted weight perturbations. Using a five-layer CNN and ResNet-50 as case studies, we show that an attacker is able to significantly increase the chance that inputs they supply will be falsely accepted by a CNN while simultaneously preserving the error rates for legitimate enrolled classes.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"201 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123032319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 85
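The attack above perturbs selected weights of a trained CNN so that attacker-supplied inputs are falsely accepted while accuracy on legitimate classes is preserved. The sketch below shows one naive way to frame that search: trial random perturbations to a single layer's weights and keep only those that raise the attacker's acceptance rate without degrading validation accuracy. All helper functions and thresholds are hypothetical placeholders, not the authors' procedure.

```python
import torch

def try_backdoor(model, layer_weight, attacker_probes, val_accuracy, accept_rate,
                 trials=100, scale=1e-3, tolerance=0.005):
    """Search for a weight perturbation that helps the attacker's probes.

    `val_accuracy(model)` and `accept_rate(model, probes)` are assumed callables
    supplied by the attacker; `layer_weight` is one nn.Parameter of the model.
    """
    baseline = val_accuracy(model)
    best_rate, best_delta = accept_rate(model, attacker_probes), None
    for _ in range(trials):
        delta = scale * torch.randn_like(layer_weight.data)
        layer_weight.data += delta                       # apply a candidate perturbation
        keeps_accuracy = val_accuracy(model) >= baseline - tolerance
        rate = accept_rate(model, attacker_probes)
        if keeps_accuracy and rate > best_rate:
            best_rate, best_delta = rate, delta.clone()  # remember the best candidate
        layer_weight.data -= delta                       # revert before the next trial
    if best_delta is not None:
        layer_weight.data += best_delta                  # install the chosen perturbation
    return best_rate
```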