2011 International Joint Conference on Biometrics (IJCB): Latest Publications

Latent fingerprint enhancement via robust orientation field estimation
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117482
Soweon Yoon, Jianjiang Feng, Anil K. Jain
{"title":"Latent fingerprint enhancement via robust orientation field estimation","authors":"Soweon Yoon, Jianjiang Feng, Anil K. Jain","doi":"10.1109/IJCB.2011.6117482","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117482","url":null,"abstract":"Latent fingerprints, or simply latents, have been considered as cardinal evidence for identifying and convicting criminals. The amount of information available for identification from latents is often limited due to their poor quality, unclear ridge structure and occlusion with complex background or even other latent prints. We propose a latent fingerprint enhancement algorithm, which expects manually marked region of interest (ROI) and singular points. The core of the proposed algorithm is a robust orientation field estimation algorithm for latents. Short-time Fourier transform is used to obtain multiple orientation elements in each image block. This is followed by a hypothesize-and-test paradigm based on randomized RANSAC, which generates a set of hypothesized orientation fields. Experimental results on NIST SD27 latent fingerprint database show that the matching performance of a commercial matcher is significantly improved by utilizing the enhanced latent fingerprints produced by the proposed algorithm.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123016673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 100
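For illustration, the sketch below estimates a block-wise fingerprint orientation field with the standard gradient-covariance method. It is not the authors' STFT-plus-RANSAC pipeline, only a minimal stand-in showing what a block orientation field is; the input image (a 2-D grayscale NumPy array) and the block size are assumptions.

# Minimal sketch: block-wise ridge orientation via gradient covariance.
# NOT the paper's method (STFT orientation elements + RANSAC hypothesis testing).
import numpy as np


def block_orientation_field(image, block_size=16):
    """Return one dominant ridge orientation (radians) per image block."""
    gy, gx = np.gradient(image.astype(float))      # row (y) and column (x) gradients
    rows, cols = image.shape[0] // block_size, image.shape[1] // block_size
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * block_size, (i + 1) * block_size),
                  slice(j * block_size, (j + 1) * block_size))
            bx, by = gx[sl], gy[sl]
            # Double-angle averaging resolves the 180-degree ambiguity of orientations.
            num = np.sum(2.0 * bx * by)
            den = np.sum(bx ** 2 - by ** 2)
            # Dominant gradient direction; ridges run orthogonal to it.
            theta[i, j] = 0.5 * np.arctan2(num, den) + np.pi / 2
    return theta

A robust estimator such as the one described in the abstract would instead fit several hypothesized smooth fields to many local orientation measurements and keep the hypothesis best supported by the data.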
A comparative evaluation of iris and ocular recognition methods on challenging ocular images
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117500
Vishnu Naresh Boddeti, J. Smereka, B. Kumar
{"title":"A comparative evaluation of iris and ocular recognition methods on challenging ocular images","authors":"Vishnu Naresh Boddeti, J. Smereka, B. Kumar","doi":"10.1109/IJCB.2011.6117500","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117500","url":null,"abstract":"Iris recognition is believed to offer excellent recognition rates for iris images acquired under controlled conditions. However, recognition rates degrade considerably when images exhibit impairments such as off-axis gaze, partial occlusions, specular reflections and out-of-focus and motion-induced blur. In this paper, we use the recently-available face and ocular challenge set (FOCS) to investigate the comparative recognition performance gains of using ocular images (i.e., iris regions as well as the surrounding peri-ocular regions) instead of just the iris regions. A new method for ocular recognition is presented and it is shown that use of ocular regions leads to better recognition rates than iris recognition on FOCS dataset. Another advantage of using ocular images for recognition is that it avoids the need for segmenting the iris images from their surrounding regions.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115167951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44
3D to 2D fingerprints: Unrolling and distortion correction
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117585
Qijun Zhao, Anil K. Jain, G. Abramovich
{"title":"3D to 2D fingerprints: Unrolling and distortion correction","authors":"Qijun Zhao, Anil K. Jain, G. Abramovich","doi":"10.1109/IJCB.2011.6117585","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117585","url":null,"abstract":"Touchless 3D fingerprint sensors can capture both 3D depth information and albedo images of the finger surface. Compared with 2D fingerprint images acquired by traditional contact-based fingerprint sensors, the 3D fingerprints are generally free from the distortion caused by non-uniform pressure and undesirable motion of the finger. Several unrolling algorithms have been proposed for virtual rolling of 3D fingerprints to obtain 2D equivalent fingerprints, so that they can be matched with the legacy 2D fingerprint databases. However, available unrolling algorithms do not consider the impact of distortion that is typically present in the legacy 2D fingerprint images. In this paper, we conduct a comparative study of representative unrolling algorithms and propose an effective approach to incorporate distortion into the unrolling process. The 3D fingerprint database was acquired by using a 3D fingerprint sensor being developed by the General Electric Global Research. By matching the 2D equivalent fingerprints with the corresponding 2D fingerprints collected with a commercial contact-based fingerprint sensor, we show that the compatibility between the 2D unrolled fingerprints and the traditional contact-based 2D fingerprints is improved after incorporating the distortion into the unrolling process.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134042239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31
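As a toy illustration of the unrolling step, the sketch below maps points on a roughly cylindrical finger surface to a 2-D plane. The cylindrical model, the crude radius estimate, and the assumption that the finger axis coincides with the z-axis are all simplifications introduced here; the paper's comparison of unrolling algorithms and its distortion model are not reproduced.

# Toy "unrolling" of a 3-D finger surface, assuming a cylinder of roughly
# constant radius whose axis is the z-axis.
import numpy as np


def unroll_cylindrical(points):
    """Map 3-D points (x, y, z) on a roughly cylindrical finger to 2-D (u, v).

    u is the arc length along the circumference, v the height along the axis,
    so geodesic distances on the surface are approximately preserved.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.median(np.hypot(x, y))        # crude radius estimate
    theta = np.arctan2(y, x)                  # angular position around the axis
    u = radius * theta                        # arc length -> horizontal coordinate
    v = z                                     # axial position -> vertical coordinate
    return np.column_stack([u, v])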
Robust face recognition with class dependent factor analysis
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117508
B. Tunç, Volkan Dagli, M. Gökmen
{"title":"Robust face recognition with class dependent factor analysis","authors":"B. Tunç, Volkan Dagli, M. Gökmen","doi":"10.1109/IJCB.2011.6117508","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117508","url":null,"abstract":"A general framework for face recognition under different variations such as illumination and facial expressions is proposed. The model utilizes the class information in a supervised manner to define separate manifolds for each class. Manifold embeddings are achieved by a nonlinear manifold learning technique. Inside each manifold, a mixture of Gaussians is designated to introduce a generative model. By this way, a novel connection between the manifold learning and probabilistic generative models is achieved. The proposed model learns system parameters in a probabilistic framework, allowing a Bayesian decision model. Experimental evaluations with face recognition under illumination changes and facial expressions were performed to realize the ability of the proposed model to handle different types of variations. Our recognition performances were comparable to state-of art results.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"63 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133321702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
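A minimal sketch of the generative, class-dependent part of such a model, assuming scikit-learn is available: one Gaussian mixture is fitted per class and a test sample is assigned to the class with the highest likelihood (equal priors). The nonlinear manifold embedding used in the paper is omitted, so this shows only the per-class mixture-of-Gaussians idea, not the proposed framework.

# Per-class mixture-of-Gaussians classifier (sketch); the paper's manifold
# embedding step is intentionally left out.
import numpy as np
from sklearn.mixture import GaussianMixture


class PerClassGMM:
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        # One generative model per class, trained only on that class's samples.
        for label in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_components,
                                  covariance_type="diag", random_state=0)
            gmm.fit(X[y == label])
            self.models[label] = gmm
        return self

    def predict(self, X):
        labels = list(self.models)
        # Log-likelihood under each class model; with equal priors the
        # Bayes decision is the arg-max over classes.
        ll = np.column_stack([self.models[c].score_samples(X) for c in labels])
        return np.asarray(labels)[np.argmax(ll, axis=1)]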
Robustness of multi-modal biometric verification systems under realistic spoofing attacks
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117474
B. Biggio, Z. Akhtar, G. Fumera, G. Marcialis, F. Roli
{"title":"Robustness of multi-modal biometric verification systems under realistic spoofing attacks","authors":"B. Biggio, Z. Akhtar, G. Fumera, G. Marcialis, F. Roli","doi":"10.1109/IJCB.2011.6117474","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117474","url":null,"abstract":"Recent works have shown that multi-modal biometric systems are not robust against spoofing attacks [12, 15, 13]. However, this conclusion has been obtained under the hypothesis of a “worst case” attack, where the attacker is able to replicate perfectly the genuine biometric traits. Aim of this paper is to analyse the robustness of some multi-modal verification systems, combining fingerprint and face biometrics, under realistic spoofing attacks, in order to investigate the validity of the results obtained under the worst-case attack assumption.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124741822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
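To make the worst-case versus realistic distinction concrete, here is a toy simulation of sum-rule score fusion of a face score and a fingerprint score when the fingerprint is spoofed at different fidelities. All score distributions and the threshold are illustrative assumptions and are not taken from the paper.

# Toy simulation of score-level (sum-rule) fusion under a fingerprint spoof.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

face_impostor = rng.normal(0.2, 0.1, n)             # attacker's own face vs. the target
finger_zero_effort = rng.normal(0.2, 0.1, n)        # no spoof at all
finger_spoof_realistic = rng.normal(0.5, 0.15, n)   # imperfect, "realistic" replica (assumed)
finger_spoof_worst = rng.normal(0.8, 0.1, n)        # worst case: scores look genuine

threshold = 0.6                                     # decision threshold on the fused score
for name, finger in [("zero-effort impostor", finger_zero_effort),
                     ("realistic spoof", finger_spoof_realistic),
                     ("worst-case spoof", finger_spoof_worst)]:
    fused = 0.5 * (face_impostor + finger)          # sum-rule fusion of the two scores
    print(f"{name:>20}: acceptance rate = {(fused > threshold).mean():.3f}")

Even this crude model shows why conclusions drawn under the perfect-replica assumption can overstate how vulnerable the fused system is to a practical spoof.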
Fundamental statistics of relatively permanent pigmented or vascular skin marks for criminal and victim identification
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117496
A. Nurhudatiana, A. Kong, Keyan Matinpour, Siu-Yeung Cho, N. Craft
{"title":"Fundamental statistics of relatively permanent pigmented or vascular skin marks for criminal and victim identification","authors":"A. Nurhudatiana, A. Kong, Keyan Matinpour, Siu-Yeung Cho, N. Craft","doi":"10.1109/IJCB.2011.6117496","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117496","url":null,"abstract":"Recent technological advances have allowed for a proliferation of digital images that may be involved in crimes. Using these images as evidence in legal cases like child pornography and masked gunmen can be challenging because usually the faces of the suspects are not visible. To perform personal identification in these images, we propose a biometric trait composed of a group of skin marks including, but not limited to, nevi, lentigines, cherry hemangiomas, and seborrheic keratoses. Due to their biological characteristics, we have grouped these as “Relatively Permanent Pigmented or Vascular Skin Marks,” abbreviated as RPPVSM. As statistical study of RPPVSM is essential before investigating their discriminative power, we present in this paper the fundamental statistics of RPPVSM. Back torso images were collected from 144 Caucasian, Asian, and Latino males, and a researcher trained in dermatology manually identified their RPPVSMs. The statistical results show that Caucasians tend to have more RPPVSMs than Asians and Latinos, and over 80 percent of middle to low density RPPVSM patterns are independently and uniformly distributed.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114182773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
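The paper's statistical methodology is not detailed in the abstract; as one simple illustration of how spatial uniformity of a mark pattern could be checked, the sketch below bins normalized mark coordinates into a grid and applies a chi-square goodness-of-fit test. The function name, the grid size, and the use of a chi-square test are assumptions made here for illustration only.

# Chi-square check of spatial uniformity for a 2-D point pattern (illustrative).
import numpy as np
from scipy.stats import chisquare


def uniformity_pvalue(points, grid=4):
    """points: (n, 2) array of mark coordinates normalised to [0, 1) x [0, 1)."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=grid, range=[[0, 1], [0, 1]])
    observed = counts.ravel()
    expected = np.full_like(observed, observed.sum() / observed.size)
    return chisquare(observed, expected).pvalue


rng = np.random.default_rng(1)
print(uniformity_pvalue(rng.random((60, 2))))   # uniform pattern: large p-value expected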
Difficult imaging covariates or difficult subjects? - An empirical investigation
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117551
Jeffrey R. Paone, S. Biswas, G. Aggarwal, P. Flynn
{"title":"Difficult imaging covariates or difficult subjects? - An empirical investigation","authors":"Jeffrey R. Paone, S. Biswas, G. Aggarwal, P. Flynn","doi":"10.1109/IJCB.2011.6117551","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117551","url":null,"abstract":"The performance of face recognition algorithms is affected both by external factors and internal subject characteristics [1]. Reliably identifying these factors and understanding their behavior on performance can potentially serve two important goals - to predict the performance of the algorithms at novel deployment sites and to design appropriate acquisition environments at prospective sites to optimize performance. There have been a few recent efforts in this direction that focus on identifying factors that affect face recognition performance but there has been no extensive study regarding the consistency of the effects various factors have on algorithms when other covariates vary. To give an example, a smiling target image has been reported to be better than a neutral expression image, but is this true across all possible illumination conditions, head poses, gender, etc.? In this paper, we perform rigorous experiments to provide answers to such questions. Our investigation indicates that controlled lighting and smiling expression are the most favorable conditions that consistently give superior performance even when other factors are allowed to vary. We also observe that internal subject characterization using biometric menagerie-based classification shows very weak consistency when external conditions are allowed to vary.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116180885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Symmetric surface-feature based 3D face recognition for partial data
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117539
D. Smeets, J. Keustermans, Jeroen Hermans, P. Claes, D. Vandermeulen, P. Suetens
{"title":"Symmetric surface-feature based 3D face recognition for partial data","authors":"D. Smeets, J. Keustermans, Jeroen Hermans, P. Claes, D. Vandermeulen, P. Suetens","doi":"10.1109/IJCB.2011.6117539","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117539","url":null,"abstract":"Since most 3D cameras cannot capture the complete 3D face, an important challenge in 3D face recognition is the comparison of two 3D facial surfaces with little or no overlap. In this paper, a local feature method is presented to tackle this challenge exploiting the symmetry of the human face. Features are located and described using an extension of SIFT for meshes (meshSIFT). As such, features are localized as extrema in the curvature scale space of the input mesh, and are described by concatenating histograms of shape indices and slant angles of the neighborhood. For 3D face scans with sufficient overlap, the number of matching meshSIFT features is a reliable measure for face recognition purposes. However, as the feature descriptor is not symmetrical, features on one face are not matched with their symmetrical counterpart on another face impeding their feasibility for comparison of face scans with limited or no (left-right) overlap. In order to alleviate this problem, facial symmetry could be used to increase the overlap between two face scans by mirroring one of both faces w.r.t. an arbitrary plane. As this would increase the computational demand, this paper proposes an efficient approach to describe the features of a mirrored face by mirroring the mesh-SIFT descriptors of the input face. The presented method is validated on the data of the “SHREC '11: Face Scans” contest, containing many partial scans. This resulted in a recognition rate of 98.6% and a mean average precision of 93.3%, clearly outperforming all other participants in the challenge.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125775715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
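The meshSIFT detector and descriptor are not reproduced here; the sketch below only shows the generic scoring step the abstract relies on, counting nearest-neighbour descriptor matches that pass a ratio test. In the same spirit, one could also match against mirrored descriptors and keep the larger count; the descriptors themselves are assumed given.

# Generic descriptor matching: the number of ratio-test matches between two
# descriptor sets is used as the face-similarity score.
import numpy as np


def count_matches(desc_a, desc_b, ratio=0.75):
    """desc_a: (n, d), desc_b: (m, d) with m >= 2. Returns the match count."""
    matches = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        if nearest < ratio * second:            # Lowe-style ratio test
            matches += 1
    return matches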
Fast and accurate biometric identification using score level indexing and fusion
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117591
Takao Murakami, Kenta Takahashi
{"title":"Fast and accurate biometric identification using score level indexing and fusion","authors":"Takao Murakami, Kenta Takahashi","doi":"10.1109/IJCB.2011.6117591","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117591","url":null,"abstract":"Biometric identification provides a very convenient way to authenticate a user because it does not require the user to claim an identity. However, both the identification error rates and the response time increase almost in proportion to the number of enrollees. A technique which decreases both of them using only scores has the advantage that it can be applied to any kind of biometric system that outputs scores. In this paper, we propose such a technique by combining score level fusion and distance-based indexing. In order to reduce the retrieval error rate in multibiometric identification, our technique takes a strategy to select the template of the enrollee whose posterior probability of being identical to the claimant is the highest as a next to be matched. The experimental evaluation using the Biosecure DS2 dataset and the CASIA-FingerprintV5 showed that our technique significantly reduced the identification error rates while keeping down or even reducing the number of score calculations, compared to the unimodal biometrics.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125805363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
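Here is a sketch of the "probe the most probable enrollee next" idea, under the assumption that genuine and impostor score distributions are known (Gaussians here) and with a hypothetical score_fn(i) callback returning the claimant's match score against enrollee i. The paper's actual indexing structure, fusion rule, and score models are not reproduced.

# Greedy posterior-driven identification (sketch). GENUINE/IMPOSTOR and
# score_fn are assumptions standing in for the system's fitted score models
# and its matcher, respectively.
import numpy as np
from scipy.stats import norm

GENUINE = norm(0.8, 0.1)       # assumed genuine score distribution
IMPOSTOR = norm(0.3, 0.1)      # assumed impostor score distribution


def identify(score_fn, n_enrollees, confidence=0.99):
    """Return the index of the most probable enrollee, matching greedily."""
    log_post = np.zeros(n_enrollees)            # uniform prior over enrollees
    unmatched = set(range(n_enrollees))
    while unmatched:
        # Probe next the not-yet-matched enrollee currently most likely
        # to be identical to the claimant.
        i = max(unmatched, key=lambda k: log_post[k])
        unmatched.discard(i)
        s = score_fn(i)                         # hypothetical matcher callback
        # Bayes update: only the hypothesis "claimant == i" explains s as genuine.
        log_post[i] += GENUINE.logpdf(s) - IMPOSTOR.logpdf(s)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post[i] >= confidence:
            return i                            # early exit saves score calculations
    return int(np.argmax(log_post))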
Retina features based on vessel graph substructures
2011 International Joint Conference on Biometrics (IJCB) Pub Date: 2011-10-11 DOI: 10.1109/IJCB.2011.6117506
A. Arakala, Stephen A. Davis, K. Horadam
{"title":"Retina features based on vessel graph substructures","authors":"A. Arakala, Stephen A. Davis, K. Horadam","doi":"10.1109/IJCB.2011.6117506","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117506","url":null,"abstract":"We represent the retina vessel pattern as a spatial relational graph, and match features using error-correcting graph matching. We study the distinctiveness of the nodes (branching and crossing points) compared with that of the edges and other substructures (nodes of degree k, paths of length k). On a training set from the VARIA database, we show that as well as nodes, three other types of graph sub-structure completely or almost completely separate genuine from imposter comparisons. We show that combining nodes and edges can improve the separation distance. We identify two retina graph statistics, the edge-to-node ratio and the variance of the degree distribution, that have low correlation with node match score.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133432607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
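The two statistics named in the abstract are straightforward to compute once a vessel graph is available; the sketch below does so with networkx. The random graph in the usage line is only a stand-in for a segmented vessel network, and building such a graph from a retina image is not shown.

# Edge-to-node ratio and degree-distribution variance of a vessel graph.
import networkx as nx
import numpy as np


def retina_graph_stats(graph: nx.Graph):
    degrees = np.array([d for _, d in graph.degree()])
    edge_to_node_ratio = graph.number_of_edges() / graph.number_of_nodes()
    degree_variance = degrees.var()
    return edge_to_node_ratio, degree_variance


# Toy usage on a random graph standing in for a segmented vessel network.
g = nx.erdos_renyi_graph(50, 0.08, seed=0)
print(retina_graph_stats(g))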