Latest publications from the 2016 International Conference on Biometrics (ICB)

Latent fingerprint segmentation based on linear density
2016 International Conference on Biometrics (ICB) Pub Date: 2016-08-25 DOI: 10.1109/ICB.2016.7550076
Shuxin Liu, Manhua Liu, Zongyuang Yang
Abstract: Latent fingerprints are finger skin impressions left unintentionally at crime scenes, and they are important evidence for law enforcement agencies to identify criminals. Most latent fingerprint images are of poor quality, with unclear ridge structure and various non-fingerprint patterns. Segmentation is an important processing step that separates the fingerprint foreground from the background for more accurate and efficient feature extraction and identification. Traditional fingerprint segmentation methods are based on gradients and local properties, which are sensitive to noise. This paper proposes a latent fingerprint segmentation algorithm based on linear density. First, a total variation (TV) image model is used to decompose a latent image into cartoon and texture components. The texture component, which contains the latent fingerprint, is used for further processing, while the cartoon component is removed as noise. Second, we detect a set of line segments in the texture image and compute a linear density map, which characterizes the interleaved ridge and valley structure well. Finally, a segmentation mask is generated by thresholding the linear density map. The proposed method is tested on the NIST SD27 latent fingerprint database. Experimental results and comparisons demonstrate its effectiveness.
Citations: 12
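
As a rough illustration of the pipeline this abstract describes, the Python sketch below decomposes an image with TV denoising, detects line segments, and thresholds a density map. It is not the authors' implementation: it substitutes Chambolle TV denoising for the paper's TV decomposition and the probabilistic Hough transform for the paper's line-segment detection, and every parameter value is an illustrative guess.

```python
# Hypothetical sketch of a linear-density segmentation pipeline; component
# choices and parameter values are illustrative, not taken from the paper.
import cv2
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def linear_density_mask(gray, tv_weight=0.15, win=31, rel_thresh=0.3):
    img = gray.astype(np.float32) / 255.0
    cartoon = denoise_tv_chambolle(img, weight=tv_weight)   # smooth cartoon component
    texture = img - cartoon                                 # oscillatory ridge texture
    tex8 = ((texture * 255) + 128).clip(0, 255).astype(np.uint8)
    edges = cv2.Canny(tex8, 40, 120)
    # Probabilistic Hough transform stands in for a line-segment detector.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=15,
                               minLineLength=8, maxLineGap=3)
    density = np.zeros(img.shape, np.float32)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            cv2.line(density, (x1, y1), (x2, y2), 1.0, 1)   # rasterize each segment
    density = cv2.boxFilter(density, -1, (win, win))        # local segment density
    return (density > rel_thresh * max(density.max(), 1e-6)).astype(np.uint8)
```
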
Experimental results on multi-modal fusion of EEG-based personal verification algorithms
2016 International Conference on Biometrics (ICB) Pub Date: 2016-08-25 DOI: 10.1109/ICB.2016.7550080
M. Garau, M. Fraschini, Luca Didaci, G. Marcialis
Abstract: Recently, the use of brain activity as a biometric trait for automatic user recognition has been investigated. The EEG (electroencephalography) signal is more often used in the medical field for diagnostic purposes, and early EEG biometric studies adopted similar signal properties and processing tools to study individually distinctive characteristics. As a consequence, the features used were mostly related to a single region of the scalp, losing information on possible links among brain areas. In this work we investigate the EEG signal as a possible biometric by focusing on two recent methods based on functional connectivity which, in contrast with previous approaches, estimate the complex interactions between EEG signals by measuring the statistical interdependence of their time series. Because of their potential complementarity, we explored their fusion with feature-level and match score-level approaches. Experimental results show a performance improvement with respect to the individual systems.
Citations: 14
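
Match score-level fusion, one of the two strategies this abstract mentions, has a very compact generic form. The sketch below is a minimal version assuming each verification system outputs a similarity score; min-max normalization and the equal weighting are illustrative choices, not the paper's.

```python
# Minimal sketch of weighted-sum score-level fusion of two verifiers.
import numpy as np

def minmax_norm(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(scores_a, scores_b, w=0.5):
    # Weighted-sum rule on normalized scores; w trades off the two systems.
    return w * minmax_norm(scores_a) + (1 - w) * minmax_norm(scores_b)

# Accept when the fused score exceeds a threshold tuned on a validation set.
fused = fuse_scores([0.2, 0.9, 0.4], [0.1, 0.8, 0.7])
decisions = fused > 0.5
```
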
Compression standards in finger vein recognition
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550046
Victoria Ablinger, C. Zenz, Jutta Hämmerle-Uhl, A. Uhl
Abstract: The impact of applying three ISO/IEC still-image compression standards on finger vein recognition accuracy is assessed. While JPEG is not competitive at low bitrates, JPEG 2000 and JPEG XR turn out to perform well for different types of template generation techniques. PSNR, as a generic measure of image quality, is found to be unsuitable as a predictor of finger vein recognition performance under compression artifacts.
Citations: 4
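
The evaluation loop behind such a study can be approximated with standard tools. The sketch below compresses an image with JPEG at several quality settings and reports PSNR; the same pattern extends to JPEG 2000 where Pillow is built with OpenJPEG, while JPEG XR has no common Python binding and is omitted. The file name is a hypothetical placeholder.

```python
# Hypothetical sketch: compress a vein image, then measure PSNR against the
# original. Recognition accuracy would be measured separately per bitrate.
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def jpeg_roundtrip(img, quality):
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("L")

original = Image.open("finger_vein.png").convert("L")  # placeholder file name
for q in (90, 50, 10):
    rec = jpeg_roundtrip(original, q)
    print(q, psnr(np.array(original), np.array(rec)))
```
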
Textured detailed graph model for dorsal hand vein recognition: A holistic approach
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550047
Renke Zhang, Di Huang, Yunhong Wang
Abstract: Holistic and local methods are the two main strands in dorsal hand vein recognition, and the latter have recently become dominant due to their superior performance. In this paper, we propose a novel approach to dorsal hand vein recognition using a global graph model that takes both texture and shape cues into account. We first extend the basic graph model, consisting of the minutiae of the vein network and their connecting lines, to a detailed one by increasing the number of vertices, describing the profile of the vein shape more accurately. We then append the holistic texture feature of the patch around each vertex, i.e., its PCA coefficients, to make the representation of the graph model more comprehensive. These two steps significantly improve the discrimination of the graph model, which achieves a rank-one recognition rate of 98.82% on the NCUT dataset. This holistic result is comparable to those of most local-based methods, demonstrating its effectiveness. Meanwhile, with local texture cues embedded, e.g., LBP, HOG, and Gabor, it further reaches state-of-the-art accuracy of 99.22%, showing its good complementarity to local-based methods.
Citations: 9
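
One way to realize the texture-appended vertex idea is sketched below: extract a patch around each graph vertex and describe it with PCA coefficients. This is a loose sketch under stated assumptions: vertex coordinates are taken as already extracted from the vein network, the PCA basis is fit on the fly rather than learned from training data as a real system would do, and patch size and dimensionality are arbitrary.

```python
# Hypothetical sketch of attaching PCA patch descriptors to graph vertices.
import numpy as np
from sklearn.decomposition import PCA

def vertex_features(image, vertices, patch=16, n_components=20):
    half = patch // 2
    padded = np.pad(image, half, mode="edge")
    # Patch centered at each (x, y) vertex; padding handles border vertices.
    patches = [padded[y:y + patch, x:x + patch].ravel() for (x, y) in vertices]
    # In practice the PCA basis would be learned from a training set; fitting
    # per image here keeps the sketch self-contained (needs >= n_components
    # vertices).
    pca = PCA(n_components=n_components)
    return pca.fit_transform(np.asarray(patches, float))  # one row per vertex
```
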
Transferring deep representation for NIR-VIS heterogeneous face recognition
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550064
Xiaoxiang Liu, Lingxiao Song, Xiang Wu, T. Tan
Abstract: One task in heterogeneous face recognition is to match a near-infrared (NIR) face image to a visible light (VIS) image. In practice, only a few paired NIR-VIS face images are available, but it is easy to collect many VIS face images, so how to use these unpaired VIS images to improve NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to exploit large numbers of unpaired VIS face images, we employ a deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lightweight. Second, we transfer these models to the NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variation but also augments the number of positive training sample pairs, making it possible to fine-tune deep models on a small dataset. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database, with a new record rank-1 accuracy of 95.74% and a verification rate of 91.03% at FAR = 0.001. It cuts the error rate by 69% compared with the best previously reported accuracy [27].
Citations: 118
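
The Max-Feature-Map activation named in the abstract has a compact definition: split the channel dimension in half and take an element-wise maximum, which performs competitive feature selection while halving the width. The PyTorch sketch below is a generic rendition; the margin value and the use of the built-in triplet loss are illustrative, not the paper's exact NIR-VIS triplet formulations.

```python
# Generic sketch of the Max-Feature-Map (MFM) activation plus a triplet loss.
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)  # requires an even channel count
        return torch.max(a, b)           # competitive selection, halves channels

# Fine-tuning with a triplet objective can use the built-in loss; the margin
# here is an illustrative value:
triplet = nn.TripletMarginLoss(margin=0.3)
# loss = triplet(anchor_emb, positive_emb, negative_emb)
```
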
User individuality based cost-sensitive learning: A case study in finger vein recognition
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550075
Lu Yang, Gongping Yang, Yilong Yin, Lizhen Zhou
Abstract: State-of-the-art cost-sensitive learning techniques in biometrics ignore cost differences between users and determine the loss based only on the misrecognition category. In practice this may not always hold, and user individuality may also affect the loss of a misrecognition. For example, misrecognizing an impostor as an administrator can cause a much more serious loss than misrecognizing one as a normal user. At the same time, two administrators or normal users may have different probabilities of accepting an impostor; to confidently prevent the high-probability error, the cost of false acceptance for a user with a high acceptance probability should be larger than that for other users. To make the cost definition more reasonable and further lower the misrecognition cost of a recognition system, we propose to incorporate user individuality, i.e., user role and user gullibility, into the traditional cost-sensitive learning model by defining an improved objective function. Using the new model, we further develop a user role and gullibility based mckNN (rg-mckNN). Experimental results on finger vein databases demonstrate the effectiveness of the proposed method.
Citations: 1
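
To make the cost-sensitive idea concrete, the sketch below shows a generic cost-sensitive kNN decision rule: instead of majority voting, pick the class with the lowest expected cost under the neighbors' label frequencies. This is an assumption-laden illustration, not the paper's rg-mckNN; folding user role and gullibility into the cost matrix is exactly the kind of refinement the abstract proposes.

```python
# Hypothetical sketch of cost-sensitive kNN. cost[i, j] is the loss of
# predicting class i when the true class is j; per-user role/gullibility
# costs would be encoded in this matrix (our assumption for illustration).
import numpy as np

def cost_sensitive_knn_predict(neighbor_labels, cost):
    n_classes = cost.shape[0]
    votes = np.bincount(neighbor_labels, minlength=n_classes) / len(neighbor_labels)
    expected = cost @ votes          # expected loss of each possible prediction
    return int(np.argmin(expected))  # pick the cheapest class, not the majority

# Falsely accepting an impostor (true class 1) as an administrator (predicted
# class 0) costs 5x the reverse error, so one impostor vote outweighs two
# genuine votes:
cost = np.array([[0.0, 5.0],
                 [1.0, 0.0]])
print(cost_sensitive_knn_predict(np.array([0, 0, 1]), cost))  # -> 1
```
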
Discriminative Feature Adaptation for cross-domain facial expression recognition
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550085
Ronghang Zhu, Gaoli Sang, Qijun Zhao
Abstract: Facial expression recognition is an important problem in many face-related tasks, such as face recognition, face animation, affective computing, and human-computer interfaces. Existing methods mostly assume that testing and training face images are captured under the same conditions and from the same population. Such an assumption is, however, not valid in real-world applications, where face images can come from varying domains due to different cameras, illuminations, or populations. Motivated by recent progress in domain adaptation, this paper proposes an unsupervised domain adaptation method, called discriminative feature adaptation (DFA), which requires for training a set of labelled face images in the source domain and some additional unlabelled face images in the target domain. It seeks a feature space for representing face images from different domains such that two objectives are fulfilled: (i) mismatches between the feature distributions of these face images are minimized, and (ii) the features are discriminative among these face images with respect to their facial expressions. Compared with existing methods, the proposed method more effectively adapts discriminative features for recognizing facial expressions across various domains. Evaluation experiments have been conducted on four public facial expression databases: CK+, JAFFE, PICS, and FEED. The results demonstrate the superior performance of the proposed method over competing methods.
Citations: 27
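
Objective (i), minimizing the mismatch between feature distributions, is often quantified with the Maximum Mean Discrepancy in domain adaptation work. The sketch below shows the simplest linear-kernel variant; it illustrates the general idea only and is not the paper's actual objective.

```python
# Generic sketch of a linear-kernel MMD between two domains' features:
# the squared distance between their mean feature embeddings.
import numpy as np

def mmd_linear(source_feats, target_feats):
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)  # zero iff the mean embeddings coincide
```
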
Post-mortem human iris recognition
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550073
Mateusz Trokielewicz, A. Czajka, P. Maciejewicz
Abstract: This paper presents a unique analysis of post-mortem human iris recognition. Post-mortem human iris images were collected at the university mortuary in three sessions separated by approximately 11 hours, with the first session held 5 to 7 hours after demise. Analysis performed with four independent iris recognition methods shows that the common claim that the iris is useless for biometric identification soon after death is not entirely true. Since the pupil has a constant and neutral dilation after death (the so-called "cadaveric position"), the iris pattern is perfectly visible from the standpoint of dilation. We found that more than 90% of irises are still correctly recognized when captured a few hours after death, and that serious iris deterioration begins approximately 22 hours later, as the recognition rate drops to 13.3-73.3% (depending on the method used) once the cornea starts to become cloudy. Only two failures to enroll (out of 104 images) were observed, and only for a single method of the four employed in this study. These findings show that the dynamics of the post-mortem changes to the iris that matter for biometric identification are much more moderate than previously believed. To the best of our knowledge, this paper presents the first experimental study of how iris recognition works after death, and we hope that these preliminary findings will stimulate further research in this area.
Citations: 36
Exploring complementary features for iris recognition on mobile devices
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550079
Qi Zhang, Haiqing Li, Zhenan Sun, Zhaofeng He, T. Tan
Abstract: Iris recognition on mobile devices is challenging due to the large number of low-quality iris images acquired in complex imaging conditions. Illumination variations, low resolution, and severe noise reduce the distinctiveness of iris texture. This paper explores complementary features to improve the accuracy of iris recognition on mobile devices. First, optimized ordinal measures (OMs) features are extracted to encode local iris texture. Then, pairwise features are automatically learned with a convolutional neural network (CNN) to measure the correlation between two irises. Finally, the selected OMs features and the learned pairwise features are fused at the score level. Experiments are performed on a newly constructed mobile iris database containing 6000 images of 200 Asian subjects, with iris images of the left and right eyes captured simultaneously at varying standoff distances. Experimental results demonstrate that OMs features and pairwise features are highly complementary and effective for iris recognition on mobile devices.
Citations: 21
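
An ordinal measure, in its simplest form, compares the filtered responses of two image lobes and keeps only the sign, a 1-bit feature that is robust to monotonic illumination changes. The sketch below is a bare-bones illustration of that idea, not the paper's optimized OMs: the lobe placement, filter choice, and wrap-around shift are all simplifications.

```python
# Bare-bones sketch of a di-lobe ordinal code for an unwrapped iris image.
import numpy as np
from scipy.ndimage import gaussian_filter

def ordinal_code(iris, dx=6, sigma=2.0):
    smooth = gaussian_filter(iris.astype(float), sigma)
    # Compare each pixel with a horizontally shifted lobe; np.roll wraps at
    # the border, a simplification acceptable on angularly unwrapped irises.
    shifted = np.roll(smooth, dx, axis=1)
    return (smooth - shifted > 0).astype(np.uint8)  # 1-bit ordinal feature map

# Matching is typically the Hamming distance between two code maps:
def hamming(a, b):
    return float(np.mean(a != b))
```
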
Towards resolution invariant face recognition in uncontrolled scenarios
2016 International Conference on Biometrics (ICB) Pub Date: 2016-06-13 DOI: 10.1109/ICB.2016.7550087
Dan Zeng, Hu Chen, Qijun Zhao
Abstract: Face images captured by surveillance cameras usually have poor quality, particularly low resolution (LR), which seriously degrades face recognition performance. In this paper, we develop a novel approach to matching an LR face image against a gallery of relatively high-resolution (HR) face images. Existing methods deal with this cross-resolution face recognition problem either by importing information from HR images to help synthesize HR images from LR ones, or by applying the discrimination of HR images to help search for a unified feature space. Instead, we treat the discrimination information of HR and LR face images equally to boost performance. The proposed approach learns resolution-invariant features that aim to (1) classify the identity of both LR and HR face images accurately, and (2) preserve the discriminative information among subjects across different resolutions. We conduct experiments on databases of uncontrolled scenarios, i.e., SCface and COX, and the results show that the proposed approach significantly outperforms state-of-the-art methods.
Citations: 31
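
The two training aims in this abstract map naturally onto a two-term loss: identity classification on both resolutions plus a term pulling the HR and LR embeddings of the same face together. The PyTorch sketch below is a generic rendition under that assumption; the MSE alignment term and the weighting are illustrative, not the paper's exact formulation.

```python
# Generic sketch of a resolution-invariant training objective.
import torch
import torch.nn.functional as F

def resolution_invariant_loss(feat_hr, feat_lr, logits_hr, logits_lr,
                              labels, lam=0.1):
    # (1) classify identity correctly at both resolutions
    cls = F.cross_entropy(logits_hr, labels) + F.cross_entropy(logits_lr, labels)
    # (2) align HR and LR embeddings of the same subject
    consistency = F.mse_loss(feat_lr, feat_hr)
    return cls + lam * consistency
```
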