2019 International Conference on Biometrics (ICB): Latest Publications

Gait-Based Age Estimation with Deep Convolutional Neural Network
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987240
Shaoxiong Zhang, Yunhong Wang, Annan Li
Abstract: Gait is a unique biometric identifier for its non-invasive and low-cooperative features. Gait-based attribute recognition can play a crucial role in a wide range of applications, such as intelligent surveillance and criminal retrieval. However, due to the lack of data, relatively few studies have applied deep convolutional neural networks to gait attribute recognition. In this study, building on new progress in public gait datasets, we propose a deep convolutional neural network with multi-task learning for gait-based human age estimation. Gait energy images are fed directly into our model for age estimation, while gender information is also integrated to improve age estimation performance. Experiments on the large-scale OULP-Age dataset show that our model outperforms the state of the art.
Citations: 14
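The multi-task setup described above combines an age-regression loss with a gender-classification loss. A minimal numeric sketch of such a combined objective, assuming an MSE age term, a softmax cross-entropy gender term, and a weighting factor `lam` (all illustrative choices, not taken from the paper):

```python
import numpy as np

def multi_task_loss(age_pred, age_true, gender_logits, gender_true, lam=0.5):
    """Combined loss: MSE on age plus weighted cross-entropy on gender."""
    mse = np.mean((age_pred - age_true) ** 2)
    # numerically stable softmax cross-entropy over the two gender classes
    z = gender_logits - gender_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(gender_true)), gender_true])
    return mse + lam * ce

# toy batch of two subjects (values are made up)
age_pred = np.array([25.0, 40.0])
age_true = np.array([24.0, 43.0])
gender_logits = np.array([[2.0, -1.0], [0.5, 0.5]])
gender_true = np.array([0, 1])
loss = multi_task_loss(age_pred, age_true, gender_logits, gender_true)
print(loss)
```

In a real network the two heads would share a convolutional trunk over the gait energy image, so minimizing this joint loss shapes shared features with gender information, which is the effect the abstract credits for the accuracy gain.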
End-to-End Protocols and Performance Metrics For Unconstrained Face Recognition
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987345
James A. Duncan, N. Kalka, Brianna Maze, Anil K. Jain
Abstract: Face recognition algorithms have received substantial attention over the past decade, resulting in significant performance improvements. Arguably, this improvement can be attributed to the widespread availability of large face training sets, GPU computing to train state-of-the-art deep learning algorithms, and the curation of challenging test sets that continue to push the state of the art. Traditionally, protocol design and algorithm evaluation have primarily focused on measuring the performance of specific stages of the biometric pipeline (e.g., face detection, feature extraction, or recognition) and do not capture errors that may propagate from face input to identification output in an end-to-end (E2E) manner. In this paper, we address this problem by expanding upon the novel open-set E2E identification protocols created for the IARPA Janus program. In particular, we describe in detail the joint detection, tracking, clustering, and recognition protocols, introduce novel E2E performance metrics, and provide rigorous evaluation using the IARPA Janus Benchmark C (IJB-C) and S (IJB-S) datasets.
Citations: 2
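Open-set identification protocols of the kind referenced above are typically scored with a miss rate on mated probes and a false-alarm rate on non-mated probes at a given score threshold. A hedged sketch of that basic computation (the score values are made up, and the paper's E2E metrics additionally fold in detection, tracking, and clustering errors, which this toy version does not model):

```python
import numpy as np

def fnir_fpir(mated_scores, nonmated_scores, threshold):
    """Open-set rates at a threshold: fraction of mated probes missed,
    and fraction of non-mated probes that raise a false alarm."""
    fnir = np.mean(mated_scores < threshold)      # missed true identities
    fpir = np.mean(nonmated_scores >= threshold)  # false alarms on unknowns
    return fnir, fpir

mated = np.array([0.9, 0.8, 0.4, 0.95])       # probes with a gallery mate
nonmated = np.array([0.3, 0.7, 0.2, 0.1, 0.5])  # probes of unknown subjects
fnir, fpir = fnir_fpir(mated, nonmated, threshold=0.6)
print(fnir, fpir)
```

Sweeping the threshold trades the two rates against each other, which is how open-set operating curves are drawn.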
Multi-sample Compression of Finger Vein Images using H.265 Video Coding
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987412
Kevin Schörgnhofer, Sami Dafir, A. Uhl
Abstract: A new video-compression-based approach extending traditional biometric sample data compression techniques is evaluated in the context of finger vein recognition. The proposed scheme is implemented in HEVC/H.265 in different settings and compared to (i) compressing each sample individually with JPEG2000 according to ISO/IEC 19794-9:2011 and (ii) compressing each user's data into an individual video file. Compression efficiency and implications for recognition accuracy are determined using 4 recognition schemes and 2 data sets, both based on publicly available data. Results obtained using the proposed approach are fairly stable across the different recognition schemes and data sets and indicate a significant improvement over the current state of the art.
Citations: 0
SeLENet: A Semi-Supervised Low Light Face Enhancement Method for Mobile Face Unlock
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987344
Ha A. Le, I. Kakadiaris
Abstract: Facial recognition is becoming a standard feature on new smartphones. However, the face unlocking feature of devices using regular 2D camera sensors exhibits poor performance in low light environments. In this paper, we propose a semi-supervised low light face enhancement method to improve face verification performance on low light face images. The proposed method is a network with two components: decomposition and reconstruction. The decomposition component splits an input low light face image into face normals and face albedo, while the reconstruction component enhances and reconstructs the lighting condition of the input image using the spherical harmonic lighting coefficients of a direct ambient white light. The network is trained in a semi-supervised manner using both labeled synthetic data and unlabeled real data. Qualitative results demonstrate that the proposed method produces more realistic images than state-of-the-art low light enhancement algorithms. Quantitative experiments confirm the effectiveness of our low light face enhancement method for face verification. By applying the proposed method, the gap in verification accuracy between extreme low light and neutral light face images is reduced from approximately 3% to 0.5%.
Citations: 5
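The reconstruction step described above relights a face from its albedo and normals under spherical-harmonic (SH) lighting. A minimal sketch of that rendering model, assuming a Lambertian surface and keeping only the first 4 SH terms on a tiny 2x2 "image" (the actual network operates on full face images with a learned decomposition and 9 or more coefficients):

```python
import numpy as np

def relight(albedo, normals, sh_coeffs):
    """albedo: (H, W); normals: (H, W, 3) unit vectors; sh_coeffs: (4,).
    Returns the shaded image albedo * max(0, SH-basis . coeffs)."""
    h, w, _ = normals.shape
    # first-order SH basis per pixel: [1, nx, ny, nz]
    basis = np.concatenate([np.ones((h, w, 1)), normals], axis=-1)
    shading = basis @ sh_coeffs
    return albedo * np.clip(shading, 0.0, None)

albedo = np.full((2, 2), 0.8)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                          # every pixel faces the camera
ambient_white = np.array([1.0, 0.0, 0.0, 0.0])  # pure ambient light, as in the paper's target
print(relight(albedo, normals, ambient_white))
```

With a purely ambient coefficient vector the shading is constant, so the output is just the albedo scaled by the ambient intensity; directional coefficients would modulate it by the surface normals.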
BioPass-UFPB: a Novel Multibiometric Database
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987313
A. Silva, H. Gomes, H. N. Oliveira, P. B. Lins, Diego F. S. Lima, L. Batista
Abstract: The baseline of a new multibiometric database is presented. The database consists of hand and fingerprint images acquired under a controlled environment, producing 5184 × 3456 pixel hand images and 800 × 750 pixel, 500 dpi finger images. BioPass-UFPB includes data from 100 individuals; the data acquisition, setup, and protocols are described, as well as population statistics. Verification and classification results are presented for the hand images in order to provide a baseline for other research projects using these data. BioPass-UFPB is publicly available for research purposes in a conscious effort to improve reproducibility in multimodal biometrics.
Citations: 0
Universal Material Translator: Towards Spoof Fingerprint Generalization
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987320
Rohit Gajawada, Additya Popli, T. Chugh, A. Namboodiri, Anil K. Jain
Abstract: Spoof detectors are classifiers trained to distinguish spoof fingerprints from bonafide ones. However, state-of-the-art spoof detectors do not generalize well to unseen spoof materials. This study proposes a style-transfer-based augmentation wrapper that can be used with any existing spoof detector and can dynamically improve the robustness of the spoof detection system on spoof materials for which very little data is available. Our approach synthesizes new spoof images from a few spoof examples by transferring the style, or material properties, of the spoof examples to the content of bonafide fingerprints, generating a larger number of examples on which to train the classifier. We demonstrate the effectiveness of our approach on materials in the publicly available LivDet 2015 dataset and show that it leads to robustness to fingerprint spoofs of the target material.
Citations: 31
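Style transfer of the kind used by this augmentation wrapper commonly summarizes "style" (here, material texture) with Gram matrices of CNN feature maps and penalizes their mismatch. A sketch of that statistic, using random arrays as stand-ins for network activations (the paper trains a generator with such losses; this only illustrates the loss itself):

```python
import numpy as np

def gram(features):
    """features: (C, H, W) -> (C, C) channel co-occurrence statistics."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(f_style, f_generated):
    """Squared Frobenius distance between the two Gram matrices."""
    return np.sum((gram(f_style) - gram(f_generated)) ** 2)

rng = np.random.default_rng(0)
f_spoof = rng.standard_normal((8, 4, 4))   # stand-in for spoof-material features
print(style_loss(f_spoof, f_spoof))        # identical features give zero loss
```

Minimizing this loss while keeping a separate content loss on the bonafide fingerprint is what lets the generator imprint the target material's texture without altering ridge structure.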
Adversarial Examples to Fool Iris Recognition Systems
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987389
Sobhan Soleymani, Ali Dabouei, J. Dawson, N. Nasrabadi
Abstract: Adversarial examples have recently proven able to fool deep learning methods by adding small, carefully crafted perturbations to the input image. In this paper, we study the possibility of generating adversarial examples for code-based iris recognition systems. Since generating adversarial examples requires back-propagation of the adversarial loss, conventional filter-bank-based iris-code generation frameworks cannot be employed in such a setup. To compensate for this shortcoming, we propose to train a deep auto-encoder surrogate network to mimic the conventional iris code generation procedure. This trained surrogate network is then deployed to generate adversarial examples using the iterative gradient sign method algorithm [15]. We consider non-targeted and targeted attacks through three attack scenarios, and study the possibility of fooling an iris recognition system in both white-box and black-box frameworks.
Citations: 14
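The iterative gradient sign method named above repeatedly steps the input in the direction of the loss gradient's sign while staying inside an epsilon-ball. A sketch using a toy logistic model as the differentiable "network" so the gradient is available in closed form (the paper instead back-propagates through its trained auto-encoder surrogate):

```python
import numpy as np

def ifgsm(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Non-targeted iterative gradient sign attack on a logistic model:
    push x to increase the cross-entropy loss for true label y."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # model's P(y=1 | x_adv)
        grad = (p - y) * w                           # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)        # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)     # stay in the eps-ball
    return x_adv

w = np.array([1.0, -2.0]); b = 0.0                   # toy model weights
x = np.array([0.5, 0.5]); y = 1.0                    # input with true label 1
x_adv = ifgsm(x, y, w, b)
print(x_adv)
```

After enough steps the perturbation saturates at the epsilon bound in each coordinate, and the model's confidence in the true label drops, which is the non-targeted attack goal.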
Merged Multi-CNN with Parameter Reduction for Face Attribute Estimation
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987397
Hiroya Kawai, Koichi Ito, T. Aoki
Abstract: This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN). The proposed method merges single-task CNNs into one CNN by adding merging points and reduces the number of parameters by removing the fully-connected layers. We also propose a new idea for reducing CNN parameters called Convolutionalization for Parameter Reduction (CPR), which estimates attributes using only convolution layers; in other words, it does not need any fully-connected layers to estimate attributes from extracted features. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR exhibits higher efficiency in face attribute estimation than conventional methods.
Citations: 1
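Estimating attributes with convolution layers alone, as CPR does, rests on a standard equivalence: a fully-connected layer applied to a flattened C x H x W feature map computes the same outputs as a convolution whose kernel covers the whole map. A toy numeric check of that equivalence (the sizes are arbitrary and far smaller than a real CNN, and this does not reproduce CPR's specific parameter savings):

```python
import numpy as np

c, h, w, n_attr = 3, 4, 4, 5
rng = np.random.default_rng(1)
feat = rng.standard_normal((c, h, w))                 # stand-in feature map
weights = rng.standard_normal((n_attr, c, h, w))      # one "kernel" per attribute

# fully-connected view: flatten features, multiply by a weight matrix
fc_out = weights.reshape(n_attr, -1) @ feat.reshape(-1)
# convolutional view: each kernel spans the whole map, giving a 1x1 output
conv_out = np.array([np.sum(k * feat) for k in weights])
print(np.allclose(fc_out, conv_out))
```

Because the two views are numerically identical, a network can drop its fully-connected head and emit one attribute score per convolution kernel, which is the structural move the abstract describes.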
Conditional Perceptual Adversarial Variational Autoencoder for Age Progression and Regression on Child Face
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987410
Praveen Kumar Chandaliya, N. Nain
Abstract: Recent work has shown that Generative Adversarial Networks (GAN) and Variational Auto-Encoders (VAE) can construct synthetic images of remarkable visual fidelity. In this paper, we propose a novel architecture based on GAN and VAE with a perceptual loss, termed Conditional Perceptual Adversarial Variational Autoencoder (CPAVAE), a model for face aging and rejuvenation on child faces. CPAVAE performs face aging and rejuvenation by learning a manifold constrained by conditions such as age and gender, which allows it to preserve face identity. CPAVAE uses six networks: an Encoder (E) and Sampling (S), which map the child face to a latent vector; a Generator (G), which takes the latent vector z as input along with an age condition vector and tries to reconstruct the input image; a perceptual loss network Φ, a pre-trained very deep convolutional network; a discriminator on the encoder (Dz), which smooths the age transformation; and a discriminator on the image (Dimg), which forces the generator to produce realistic human images. Here D and E are based on the Variational Auto-Encoder (VAE) architecture, VGGNet is used as the perceptual loss network (Ploss), and Dz and Dimg are convolutional neural networks. We demonstrate child face progression and regression on the Children Longitudinal Face (CLF) dataset, which contains 10752 face images in the age group [0, 20]: 6164 images of boys and 4588 of girls.
Citations: 11
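The Encoder-plus-Sampling pair in a VAE-based model like the one above produces a latent code via the reparameterization trick, so that sampling stays differentiable, and is regularized by a KL term toward the standard-normal prior. A brief sketch of those two standard pieces (dimensions and values are illustrative, not CPAVAE's actual configuration):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))

mu = np.zeros(4)
log_var = np.zeros(4)                 # posterior exactly matches the prior
rng = np.random.default_rng(0)
z = sample_latent(mu, log_var, rng)
print(z.shape, kl_divergence(mu, log_var))
```

In the full model, the age and gender condition vectors are concatenated with z before it enters the generator, which is how the manifold is constrained to preserve identity across ages.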
Hyperspectral Band Selection for Face Recognition Based on a Structurally Sparsified Deep Convolutional Neural Networks
2019 International Conference on Biometrics (ICB) | Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987360
Fariborz Taherkhani, J. Dawson, N. Nasrabadi
Abstract: Hyperspectral imaging systems collect and process information from specific wavelengths across the electromagnetic spectrum. The fusion of multi-spectral bands in the visible spectrum has been exploited to improve face recognition performance over conventional broad-band face images. In this paper, we propose a new Convolutional Neural Network (CNN) framework that adopts a structural sparsity learning technique to select the optimal spectral bands for the best face recognition performance over all of the spectral bands. Specifically, all the bands are fed to a CNN, and the convolutional filters in the first layer of the CNN are regularized with a group Lasso algorithm to zero out the redundant bands during training. Contrary to other methods, which usually select the bands manually or in a greedy fashion, our method selects the optimal spectral bands automatically. Moreover, experimental results demonstrate that our method outperforms state-of-the-art band selection methods for face recognition on several publicly available hyperspectral face image datasets.
Citations: 0
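The group Lasso mechanism described in this last abstract groups first-layer filter weights per spectral band, penalizes the sum of per-group L2 norms, and drives whole groups to zero, deselecting those bands. A sketch of the penalty and its proximal (shrinkage) step on hand-picked weight groups (in the paper the groups are learned convolutional filters, and training interleaves this with gradient steps):

```python
import numpy as np

def group_lasso_penalty(w):
    """w: (bands, filter_params). Penalty = sum of per-band L2 norms."""
    return np.sum(np.linalg.norm(w, axis=1))

def prox_group_lasso(w, t):
    """Proximal step: shrink each band group toward zero by t;
    groups whose norm is <= t are zeroed out entirely (band deselected)."""
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    scale = np.clip(1.0 - t / np.maximum(norms, 1e-12), 0.0, None)
    return w * scale

w = np.array([[3.0, 4.0],    # band 0: norm 5 -> kept, shrunk
              [0.3, 0.4],    # band 1: norm 0.5 -> zeroed out
              [0.0, 1.0]])   # band 2: norm 1 -> zeroed out
w_new = prox_group_lasso(w, t=1.0)
selected = np.flatnonzero(np.linalg.norm(w_new, axis=1) > 0)
print(selected)              # indices of the surviving spectral bands
```

Because the penalty is on group norms rather than individual weights, sparsity lands on entire bands at once, which is exactly what makes it usable as an automatic band selector.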