2017 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Fingerprint minutiae extraction using deep learning
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272678
L. N. Darlow, Benjamin Rosman
{"title":"Fingerprint minutiae extraction using deep learning","authors":"L. N. Darlow, Benjamin Rosman","doi":"10.1109/BTAS.2017.8272678","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272678","url":null,"abstract":"The high variability of fingerprint data (owing to, e.g., differences in quality, moisture conditions, and scanners) makes the task of minutiae extraction challenging, particularly when approached from a stance that relies on tunable algorithmic components, such as image enhancement. We pose minutiae extraction as a machine learning problem and propose a deep neural network — MENet, for Minutiae Extraction Network — to learn a data-driven representation of minutiae points. By using the existing capabilities of several minutiae extraction algorithms, we establish a voting scheme to construct training data, and so train MENet in an automated fashion on a large dataset for robustness and portability, thus eliminating the need for tedious manual data labelling. We present a post-processing procedure that determines precise minutiae locations from the output of MENet. We show that MENet performs favourably in comparisons against existing minutiae extractors.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126964385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 54
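The abstract above describes building training labels through a voting scheme over several existing minutiae extractors, but gives no implementation details. Below is a minimal sketch of one plausible consensus rule, assuming each extractor returns (x, y) minutiae coordinates; the pixel tolerance and vote threshold are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def vote_minutiae(extractor_outputs, tol=8, min_votes=3):
    """Consensus minutiae from several extractors' (x, y) lists.

    extractor_outputs: list of (N_i, 2) arrays, one per extractor.
    A candidate point is kept if at least `min_votes` extractors
    report a minutia within `tol` pixels of it. (Illustrative only;
    the paper's actual voting rule is not given in the abstract.)
    """
    candidates = np.vstack(extractor_outputs)
    kept = []
    for pt in candidates:
        votes = sum(
            np.any(np.linalg.norm(out - pt, axis=1) <= tol)
            for out in extractor_outputs
        )
        if votes >= min_votes:
            kept.append(pt)
    # de-duplicate near-identical consensus points
    result = []
    for pt in kept:
        if all(np.linalg.norm(pt - q) > tol for q in result):
            result.append(pt)
    return np.array(result)
```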
A method for 3D iris reconstruction from multiple 2D near-infrared images
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272735
Diego Bastias, C. Pérez, Daniel P. Benalcazar, K. Bowyer
{"title":"A method for 3D iris reconstruction from multiple 2D near-infrared images","authors":"Diego Bastias, C. Pérez, Daniel P. Benalcazar, K. Bowyer","doi":"10.1109/BTAS.2017.8272735","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272735","url":null,"abstract":"The need to verify identity has become an everyday experience for most people. Biometrics is the principal means for reliable identification of people. Although iris recognition is the most reliable current technique for biometric identification, it has limitations because only segments of the iris are available due to occlusions from the eyelids, eyelashes, specular highlights, etc. The goal of this research is to study iris reconstruction from several 2D near infrared iris images, adding depth information to iris recognition. We expect that adding depth information from the iris surface will make it possible to identify people from a smaller segment of the iris. We designed a sensor for 2D near-infrared iris image acquisition. The method follows a pre-processing stage with the goal of performing iris enhancement, eliminating occlusions, reflections and extreme gray-level values, ending in iris texture equalization. The last step is the 3D iris model reconstruction based on several 2D iris images acquired at different angles. Results from each stage are presented.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121513590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
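The pre-processing stage above (masking occlusions, reflections, and extreme gray levels, then equalizing iris texture) is described only at a high level. Here is a minimal sketch of one common way to realize the equalization step, using OpenCV's CLAHE; the gray-level thresholds and CLAHE parameters are assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def preprocess_iris(img_gray, low=10, high=245):
    """Mask extreme gray levels and equalize iris texture.

    `low`/`high` thresholds and the CLAHE settings are illustrative
    assumptions; the paper's abstract does not specify them.
    """
    # mask off specular highlights and near-black occlusions
    mask = (img_gray > low) & (img_gray < high)
    # contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(img_gray)
    # keep equalized texture only where the mask is valid
    return np.where(mask, equalized, 0).astype(np.uint8), mask
```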
Deep Dense Multi-level feature for partial high-resolution fingerprint matching
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272723
Fandong Zhang, Shiyuan Xin, Jufu Feng
{"title":"Deep Dense Multi-level feature for partial high-resolution fingerprint matching","authors":"Fandong Zhang, Shiyuan Xin, Jufu Feng","doi":"10.1109/BTAS.2017.8272723","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272723","url":null,"abstract":"Fingerprint sensors on mobile devices commonly have limited area, which results in partial fingerprints. Optical sensor can capture fingerprints at very high resolution (2000ppi) with abundant details like pores, incipients, etc. It is quite crucial to develop effective partial-to-partial high-resolution fingerprint matching algorithms. Existing fingerprint matching methods are mainly minutiae-based, with fusion of different levels of features. Their accuracy degrades significantly in our application due to minutiae insufficiency and detection error. In this paper, we propose a novel representation for partial high-resolution fingerprint, named Deep Dense Multi-level feature (DDM). We train a deep convolutional neural network that can extract discriminative features inside any local fingerprint block with certain size. We find that not only minutiae but most local blocks contain sufficient features. Moreover, we analyze DDM and find that it contains multi-level information. When utilizing DDM for partial-to-partial matching, we first extract features block by block through a fully convolutional network, next match the two sets of features pairwise exhaustively, and then select the bi-directional best matches to compute matching score. Experiments indicate that our method outperforms several state-of-the-art approaches.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121100356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
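The matching step in the entry above is concrete enough to sketch: block features are compared pairwise exhaustively, and only bi-directional (mutual) best matches contribute to the score. A minimal numpy version follows, assuming cosine similarity and a mean over mutual matches as the aggregation; the paper's actual similarity and scoring functions are not given in the abstract.

```python
import numpy as np

def match_score(feats_a, feats_b):
    """Bi-directional best-match score between two block-feature sets.

    feats_a: (Na, D), feats_b: (Nb, D). The similarity measure
    (cosine) and aggregation (mean over mutual best matches) are
    assumptions; the abstract only states that bi-directional best
    matches are selected.
    """
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                      # exhaustive pairwise similarity
    best_ab = sim.argmax(axis=1)       # best match in B for each A
    best_ba = sim.argmax(axis=0)       # best match in A for each B
    # keep only pairs that choose each other
    mutual = [(i, j) for i, j in enumerate(best_ab) if best_ba[j] == i]
    if not mutual:
        return 0.0
    return float(np.mean([sim[i, j] for i, j in mutual]))
```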
Learning face similarity for re-identification from real surveillance video: A deep metric solution
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272704
Pei Li, M. L. Prieto, P. Flynn, D. Mery
{"title":"Learning face similarity for re-identification from real surveillance video: A deep metric solution","authors":"Pei Li, M. L. Prieto, P. Flynn, D. Mery","doi":"10.1109/BTAS.2017.8272704","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272704","url":null,"abstract":"Person re-identification (ReID) is the task of automatically matching persons across surveillance cameras with location or time differences. Nearly all proposed ReID approaches exploit body features. Even if successfully captured in the scene, faces are often assumed to be unhelpful to the ReID process[3]. As cameras and surveillance systems improve, ‘Facial ReID’ approaches deserve attention. The following contributions are made in this work: 1) We describe a high-quality dataset for person re-identification featuring faces. This dataset was collected from a real surveillance network in a municipal rapid transit system, and includes the same people appearing in multiple sites at multiple times wearing different attire. 2) We employ new DNN architectures and patch matching techniques to handle face misalignment in quality regimes where landmarking fails. We further boost the performance by adopting the fully convolutional structure and spatial pyramid pooling (SPP).","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"143 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129486269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
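The second contribution above mentions spatial pyramid pooling (SPP), which produces a fixed-length vector from a variable-size feature map. A minimal PyTorch sketch of SPP over a convolutional feature map; the pyramid levels (1, 2, 4) are an illustrative choice, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Spatial pyramid pooling over a conv feature map.

    fmap: (N, C, H, W) tensor. For each pyramid level n, the map is
    max-pooled to an n x n grid and flattened; the outputs are
    concatenated into a fixed-length vector regardless of H and W.
    The levels are an assumption; the paper does not specify them.
    """
    pooled = [
        F.adaptive_max_pool2d(fmap, output_size=n).flatten(start_dim=1)
        for n in levels
    ]
    return torch.cat(pooled, dim=1)  # (N, C * sum(n*n for n in levels))

# e.g. a (2, 256, 14, 14) map -> (2, 256 * (1 + 4 + 16)) = (2, 5376)
```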
Convolutional neural network for age classification from smart-phone based ocular images
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272766
A. Rattani, N. Reddy, R. Derakhshani
{"title":"Convolutional neural network for age classification from smart-phone based ocular images","authors":"A. Rattani, N. Reddy, R. Derakhshani","doi":"10.1109/BTAS.2017.8272766","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272766","url":null,"abstract":"Automated age classification has drawn significant interest in numerous applications such as marketing, forensics, human-computer interaction, and age simulation. A number of studies have demonstrated that age can be automatically deduced from face images. However, few studies have explored the possibility of computational estimation of age information from other modalities such as fingerprint or ocular region. The main challenge in age classification is that age progression is person-specific which depends on many factors such as genetics, health conditions, life style, and stress level. In this paper, we investigate age classification from ocular images acquired using smart-phones. Age information, though not unique to the individual, can be combined along with ocular recognition system to improve authentication accuracy or invariance to the ageing effect. To this end, we propose a convolutional neural network (CNN) architecture for the task. We evaluate our proposed CNN model on the ocular crops of the recent large-scale Adience benchmark for gender and age classification captured using smart-phones. The obtained results establish a baseline for deep learning approaches for age classification from ocular images captured by smart-phones.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129561137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 32
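The entry proposes a CNN for age classification from ocular crops but does not specify the architecture in the abstract. Below is a purely illustrative PyTorch sketch of such a classifier; the layer configuration is an assumption, and num_classes=8 reflects the eight Adience age groups.

```python
import torch.nn as nn

class OcularAgeCNN(nn.Module):
    """Minimal CNN for age-group classification from ocular crops.

    Illustrative only: the abstract proposes a CNN but does not give
    its architecture. Adience defines 8 age groups, hence
    num_classes=8.
    """
    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):  # x: (N, 3, H, W) ocular crop
        return self.classifier(self.features(x).flatten(1))
```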
KAZE features via Fisher vector encoding for offline signature verification
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272676
Manabu Okawa
{"title":"KAZE features via fisher vector encoding for offline signature verification","authors":"Manabu Okawa","doi":"10.1109/BTAS.2017.8272676","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272676","url":null,"abstract":"The widespread use of handwritten signatures for identity authentication has resulted in a need for automated verification systems. However, there is still significant room for improvement in the performance of these automated systems when compared with the performance of human analysts, particularly forensic document examiners, under a wide range of conditions. Furthermore, even with recent techniques, obtaining as much information as possible from a limited number of samples still remains challenging. In this study, to tackle these challenges and to boost the discriminative power of offline signature verification, a new method using KAZE features based on the recent Fisher vector (FV) encoding is proposed. The adoption of a probabilistic visual vocabulary and higher-order statistics, both of which can encode detailed information about the distribution of KAZE features, provides us with a more precise spatial distribution of the characteristics for a writer. The experimental results on the public MCYT-75 dataset can be summarized as follows: 1) The proposed method improves performance compared to the recent vector of locally aggregated descriptors (VLAD)-based approach. 2) The use of principal component analysis (PCA)for the original FV can provide a more dimensionally compact vector without a significant loss in performance. 3) The proposed method provides much lower error rates than existing state-of-the-art offline signature verification systems when applied to the MCYT-75 dataset.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115056108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 11
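The pipeline above (KAZE descriptors, a probabilistic visual vocabulary, Fisher vector encoding) can be sketched with OpenCV and scikit-learn. For brevity this version keeps only the first-order FV statistics, whereas full FV encoding also includes second-order terms; the vocabulary size and diagonal-covariance GMM are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def kaze_descriptors(img_gray):
    """64-D KAZE descriptors from a grayscale signature image."""
    kaze = cv2.KAZE_create()
    _, desc = kaze.detectAndCompute(img_gray, None)
    return desc  # (N, 64), or None if no keypoints are found

def fisher_vector(desc, gmm):
    """Simplified Fisher vector: first-order statistics only.

    The paper uses full FV encoding (optionally followed by PCA);
    this sketch keeps just the normalized mean deviations per GMM
    component to stay short.
    """
    q = gmm.predict_proba(desc)            # (N, K) soft assignments
    fv = []
    for k in range(gmm.n_components):
        diff = (desc - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        fv.append((q[:, [k]] * diff).sum(axis=0)
                  / (desc.shape[0] * np.sqrt(gmm.weights_[k])))
    fv = np.concatenate(fv)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))  # power normalization
    return fv / np.linalg.norm(fv)          # L2 normalization

# vocabulary (assumed size): gmm = GaussianMixture(
#     n_components=64, covariance_type='diag').fit(all_training_descs)
```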
Age and gender classification using local appearance descriptors from facial components
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272773
Fabiola Becerra-Riera, Heydi Mendez Vazquez, A. Morales-González, M. Tistarelli
{"title":"Age and gender classification using local appearance descriptors from facial components","authors":"Fabiola Becerra-Riera, Heydi Mendez Vazquez, A. Morales-González, M. Tistarelli","doi":"10.1109/BTAS.2017.8272773","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272773","url":null,"abstract":"Face analysis and recognition systems have shown to be a valuable tool for forensic examiners. Particularly, the automatic estimation of age and gender from face images, can be useful in a wide range of forensic applications. In this work we propose to use a local appearance descriptor in a component-based way, to classify age and gender from face images. We subdivide a face image into regions of interest based on automatically detected landmarks, and represent them by using Histograms of Oriented Gradient (HOG). The representations obtained from different face regions are feeded to Support Vector Machine (SVM) classifiers to estimate the age and gender of the person in the image. Experimental analysis show the good results of this component-based approach, and its additional benefits when face images are affected by occlusions.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114504970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 12
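The component-based descriptor above maps directly onto standard library calls: HOG over landmark-defined regions, concatenated and fed to SVMs. A minimal sketch using scikit-image and scikit-learn; the region layout and HOG parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def component_hog(face_gray, regions):
    """Concatenated HOG over landmark-defined face regions.

    face_gray: 2-D grayscale face image.
    regions: list of (top, bottom, left, right) crops, e.g. around
    the eyes, nose, and mouth (layout and HOG parameters here are
    illustrative assumptions).
    """
    feats = [
        hog(face_gray[t:b, l:r],
            orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for (t, b, l, r) in regions
    ]
    return np.concatenate(feats)

# one SVM per task, trained on the component-based descriptors:
# gender_clf = SVC(kernel='rbf').fit(X_train, gender_labels)
# age_clf    = SVC(kernel='rbf').fit(X_train, age_labels)
```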
A data driven in-air-handwriting biometric authentication system
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272739
Duo Lu, Kai Xu, Dijiang Huang
{"title":"A data driven in-air-handwriting biometric authentication system","authors":"Duo Lu, Kai Xu, Dijiang Huang","doi":"10.1109/BTAS.2017.8272739","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272739","url":null,"abstract":"The gesture-based human-computer interface requires new user authentication technique because it does not have traditional input devices like keyboard and mouse. In this paper, we propose a new finger-gesture-based authentication method, where the in-air-handwriting of each user is captured by wearable inertial sensors. Our approach is featured with the utilization of both the content and the writing convention, which are proven to be essential for the user identification problem by the experiments. A support vector machine (SVM) classifier is built based on the features extracted from the hand motion signals. To quantitatively benchmark the proposed framework, we build a prototype system with a custom data glove device. The experiment result shows our system achieve a 0.1% equal error rate (EER) on a dataset containing 200 accounts that are created by 116 users. Compared to the existing gesture-based biometric authentication systems, the proposed method delivers a significant performance improvement.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133734106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 24
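The headline result above is a 0.1% equal error rate (EER). A minimal sketch of how EER is computed from genuine and impostor match scores; this illustrates only the reported metric, not the paper's feature extraction or SVM pipeline.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from genuine and impostor scores (higher = more similar).

    Sweeps the decision threshold over all observed scores and
    returns the operating point where the false accept rate (FAR)
    and false reject rate (FRR) are closest.
    """
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```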
Multi-Siamese networks to accurately match contactless to contact-based fingerprint images
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272708
Chenhao Lin, Ajay Kumar
{"title":"Multi-Siamese networks to accurately match contactless to contact-based fingerprint images","authors":"Chenhao Lin, Ajay Kumar","doi":"10.1109/BTAS.2017.8272708","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272708","url":null,"abstract":"Contactless 2D fingerprint identification is more hygienic, and enables deformation free imaging for higher accuracy. Success of such emerging contactless fingerprint technologies requires advanced capabilities to accurately match such fingerprint images with the conventional fingerprint databases which have been developed and deployed in last two decades. Convolutional neural networks have shown remarkable success for the face recognition problem. However, there has been very few attempts to develop CNN-based methods to address challenges in fingerprint identification problems. This paper proposes a multi-Siamese CNN architecture for accurately matching contactless and contact-based fingerprint images. In addition to the fingerprint images, hand-crafted fingerprint features, e.g. minutiae and core point, are also incorporated into the proposed architecture. This multi-Siamese CNN is trained using the fingerprint images and extracted features. Therefore, a more robust deep fingerprint representation is formed from the concatenation of deep feature vectors generated from multi-networks. In order to demonstrate the effectiveness of the proposed approach, a publicly available database consisting of contact-based and respective contactless finger-prints is utilized. The experimental evaluations presented in this paper achieve outperforming results, over other CNN-based methods and the traditional fingerprint cross matching methods, and validate our approach.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"351 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122291923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
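The entry above concatenates deep feature vectors from multiple Siamese branches (image, minutiae, core point) into one representation. A minimal sketch of that concatenate-then-compare step, assuming per-branch L2 normalization and cosine similarity; the abstract does not specify the comparison function.

```python
import numpy as np

def fused_similarity(branch_feats_a, branch_feats_b):
    """Similarity between two fingerprints from multi-branch features.

    branch_feats_*: list of 1-D feature vectors, one per Siamese
    branch (e.g. raw-image branch, minutiae branch, core-point
    branch). Normalization and cosine comparison are assumptions;
    the abstract only states that branch features are concatenated
    into one deep representation.
    """
    def fuse(branches):
        normed = [v / np.linalg.norm(v) for v in branches]
        fused = np.concatenate(normed)
        return fused / np.linalg.norm(fused)

    a, b = fuse(branch_feats_a), fuse(branch_feats_b)
    return float(a @ b)
```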
A competition on generalized software-based face presentation attack detection in mobile scenarios
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272758
Zinelabdine Boulkenafet, Jukka Komulainen, Z. Akhtar, A. Benlamoudi, Djamel Samai, Salah Eddine Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, Le Qin, Fei Peng, L. Zhang, Min Long, Shruti Bhilare, Vivek Kanhangad, Artur Costa-Pazo, Esteban Vázquez-Fernández, Daniel Pérez-Cabo, J. J. Moreira-Perez, D. González-Jiménez, A. Mohammadi, Sushil K. Bhattacharjee, S. Marcel, S. Volkova, Y. Tang, N. Abe, L. Li, X. Feng, Z. Xia, X. Jiang, S. Liu, Rui Shao, P. Yuen, W. Almeida, F. Andalo, Rafael Padilha, Gabriel Bertocco, William Dias, Jacques Wainer, R. Torres, A. Rocha, M. A. Angeloni, G. Folego, Alan Godoy, A. Hadid
{"title":"A competition on generalized software-based face presentation attack detection in mobile scenarios","authors":"Zinelabdine Boulkenafet, Jukka Komulainen, Z. Akhtar, A. Benlamoudi, Djamel Samai, Salah Eddine Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, Le Qin, Fei Peng, L. Zhang, Min Long, Shruti Bhilare, Vivek Kanhangad, Artur Costa-Pazo, Esteban Vázquez-Fernández, Daniel Pérez-Cabo, J. J. Moreira-Perez, D. González-Jiménez, A. Mohammadi, Sushil K. Bhattacharjee, S. Marcel, S. Volkova, Y. Tang, N. Abe, L. Li, X. Feng, Z. Xia, X. Jiang, S. Liu, Rui Shao, P. Yuen, W. Almeida, F. Andalo, Rafael Padilha, Gabriel Bertocco, William Dias, Jacques Wainer, R. Torres, A. Rocha, M. A. Angeloni, G. Folego, Alan Godoy, A. Hadid","doi":"10.1109/BTAS.2017.8272758","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272758","url":null,"abstract":"In recent years, software-based face presentation attack detection (PAD) methods have seen a great progress. However, most existing schemes are not able to generalize well in more realistic conditions. The objective of this competition is to evaluate and compare the generalization performances of mobile face PAD techniques under some real-world variations, including unseen input sensors, presentation attack instruments (PAI) and illumination conditions, on a larger scale OULU-NPU dataset using its standard evaluation protocols and metrics. Thirteen teams from academic and industrial institutions across the world participated in this competition. This time typical liveness detection based on physiological signs of life was totally discarded. Instead, every submitted system relies practically on some sort of feature representation extracted from the face and/or background regions using hand-crafted, learned or hybrid descriptors. Interesting results and findings are presented and discussed in this paper.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125436090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 125
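The competition abstract refers to OULU-NPU's standard evaluation protocols and metrics without naming them; OULU-NPU results are conventionally reported with the ISO/IEC 30107-3 metrics APCER, BPCER, and their average ACER. A minimal sketch under that assumption.

```python
import numpy as np

def pad_metrics(attack_scores, bonafide_scores, threshold):
    """APCER / BPCER / ACER for a face PAD system.

    Assumes a higher score means "more likely bona fide". These
    ISO/IEC 30107-3 metrics are the ones commonly used with the
    OULU-NPU protocols; the abstract itself does not name them.
    """
    attack_scores = np.asarray(attack_scores)
    bonafide_scores = np.asarray(bonafide_scores)
    # attack presentations wrongly accepted as bona fide
    apcer = (attack_scores >= threshold).mean()
    # bona fide presentations wrongly rejected as attacks
    bpcer = (bonafide_scores < threshold).mean()
    return apcer, bpcer, (apcer + bpcer) / 2.0
```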