2015 International Conference on Biometrics (ICB): Latest Publications

Swipe gesture based Continuous Authentication for mobile devices
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139110
Soumik Mondal, Patrick A. H. Bours
{"title":"Swipe gesture based Continuous Authentication for mobile devices","authors":"Soumik Mondal, Patrick A. H. Bours","doi":"10.1109/ICB.2015.7139110","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139110","url":null,"abstract":"In this research, we investigated the performance of a continuous biometric authentication system for mobile devices under various different analysis techniques. We tested these on a publicly available swipe gestures database with 71 users, but the techniques can also be applied to other biometric modalities in a continuous setting. The best result obtained in this research is that (1) none of the 71 genuine users is lockout from the system; (2) for 68 users we require on average 4 swipe gestures to detect an imposter; (3) for the remaining 3 genuine users, on average 14 swipes are required while 4 impostors are not detected.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133854212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
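The abstract reports how many swipes are needed before an impostor is rejected, which implies a decision rule that accumulates per-swipe evidence over time. As a rough illustration (the paper does not spell out its model in this abstract), the sketch below shows a generic trust-based continuous authentication loop; the thresholds, the reward/penalty rule and the example scores are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch of a trust-model style continuous authentication loop.
# All thresholds and the scoring rule are illustrative assumptions,
# not the parameters used by Mondal and Bours.

def continuous_authentication(swipe_scores, t_start=100.0, t_lockout=90.0,
                              reward=1.0, penalty=30.0, t_max=100.0):
    """Return the index of the swipe at which the user is locked out,
    or None if trust never drops below the lockout threshold.

    swipe_scores: iterable of per-swipe comparison scores in [0, 1],
                  where higher means "more similar to the genuine template".
    """
    trust = t_start
    for i, score in enumerate(swipe_scores, start=1):
        if score >= 0.5:                      # swipe looks genuine
            trust = min(t_max, trust + reward * (score - 0.5))
        else:                                 # swipe looks like an impostor
            trust -= penalty * (0.5 - score)
        if trust < t_lockout:
            return i                          # number of swipes needed to react
    return None

# Example: a genuine session followed by an impostor taking over the device.
genuine_part = [0.8, 0.7, 0.9, 0.75]
impostor_part = [0.3, 0.2, 0.25, 0.1]
print(continuous_authentication(genuine_part + impostor_part))  # locked out at swipe 6
```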
Fast and robust self-training beard/moustache detection and segmentation
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139066
T. Le, Khoa Luu, M. Savvides
{"title":"Fast and robust self-training beard/moustache detection and segmentation","authors":"T. Le, Khoa Luu, M. Savvides","doi":"10.1109/ICB.2015.7139066","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139066","url":null,"abstract":"Facial hair detection and segmentation play an important role in forensic facial analysis. In this paper, we propose a fast, robust, fully automatic and self-training system for beard/moustache detection and segmentation in challenging facial images. In order to overcome the limitations of illumination, facial hair color and near-clear shaving, our facial hair detection self-learns a transformation vector to separate a hair class and a non-hair class from the testing image itself. A feature vector, consisting of Histogram of Gabor (HoG) and Histogram of Oriented Gradient of Gabor (HOGG) at different directions and frequencies, is proposed for both beard/moustache detection and segmentation in this paper. A feature-based segmentation is then proposed to segment the beard/moustache from a region on the face that is discovered to contain facial hair. Experimental results have demonstrated the robustness and effectiveness of our proposed system in detecting and segmenting facial hair in images drawn from three entire databases i.e. the Multiple Biometric Grand Challenge (MBGC) still face database, the NIST color Facial Recognition Technology FERET database and a large subset from Pinellas County database.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114359182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
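The descriptor named in the abstract combines histograms of Gabor responses at several directions and frequencies. The snippet below is a minimal sketch of a histogram-of-Gabor style feature for a grayscale patch using OpenCV; the filter parameters, histogram binning and the HOGG variant are placeholders, not the exact configuration of the paper.

```python
# Minimal sketch of a histogram-of-Gabor style descriptor for a skin patch.
# Filter frequencies/orientations and histogram binning are illustrative
# placeholders, not the exact parameters used by Le, Luu and Savvides.
import cv2
import numpy as np

def gabor_histogram(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                    wavelengths=(4.0, 8.0), bins=16):
    """Concatenate histograms of Gabor magnitude responses over a grayscale patch."""
    patch = patch.astype(np.float32)
    feats = []
    for lam in wavelengths:
        for theta in thetas:
            kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=lam / 2.0,
                                        theta=theta, lambd=lam, gamma=0.5, psi=0)
            response = np.abs(cv2.filter2D(patch, cv2.CV_32F, kernel))
            hist, _ = np.histogram(response, bins=bins,
                                   range=(0, response.max() + 1e-6))
            feats.append(hist / (hist.sum() + 1e-6))   # normalized histogram
    return np.concatenate(feats)

# Example on a random "patch"; in practice this would be a region below the mouth.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(gabor_histogram(patch).shape)   # 2 wavelengths * 4 orientations * 16 bins
```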
Combining view-based pose normalization and feature transform for cross-pose face recognition
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139114
Hua Gao, H. K. Ekenel, R. Stiefelhagen
{"title":"Combining view-based pose normalization and feature transform for cross-pose face recognition","authors":"Hua Gao, H. K. Ekenel, R. Stiefelhagen","doi":"10.1109/ICB.2015.7139114","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139114","url":null,"abstract":"Automatic face recognition across large pose changes is still a challenging problem. Previous solutions apply a transform in image space or feature space for normalizing the pose mismatch. For feature transform, the feature vector extracted on a probe facial image is transferred to match the gallery condition with regression models. Usually, the regression models are learned from paired gallery-probe conditions, in which pose angles are known or accurately estimated. The solution based on image transform is able to handle continuous pose changes, yet the approach suffers from warping artifacts due to misalignment and self-occlusion. In this work, we propose a novel approach, which combines the advantage of both methods. The algorithm is able to handle continuous pose mismatch in gallery and probe set, mitigating the impact of inaccurate pose estimation in feature-transform-based method. We evaluate the proposed algorithm on the FERET face database, where the pose angles are roughly annotated. Experimental results show that our proposed method is superior to solely image/feature transform methods, especially when the pose angle difference is large.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123117225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
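The feature-transform component described in the abstract maps probe features to the gallery condition with regression models learned from paired conditions. The sketch below shows such a mapping as plain ridge regression on synthetic paired features; the pose pairing and dimensions are illustrative assumptions rather than the authors' model.

```python
# Minimal sketch of a feature-transform step for cross-pose matching: a ridge
# regression that maps features extracted at a non-frontal pose to the frontal
# (gallery) condition. The pairing of training conditions and the synthetic
# data are assumptions for illustration, not the exact regression model of Gao et al.
import numpy as np

def learn_feature_transform(X_probe_pose, X_frontal, lam=1e-2):
    """Least-squares map W such that X_probe_pose @ W approximates X_frontal."""
    d = X_probe_pose.shape[1]
    A = X_probe_pose.T @ X_probe_pose + lam * np.eye(d)
    B = X_probe_pose.T @ X_frontal
    return np.linalg.solve(A, B)

# Paired training features: the same subjects observed at both poses.
rng = np.random.default_rng(0)
X_pose45 = rng.normal(size=(200, 64))                       # features at ~45 degrees
X_frontal = X_pose45 @ rng.normal(size=(64, 64)) * 0.1 \
            + rng.normal(size=(200, 64)) * 0.01             # corresponding frontal features

W = learn_feature_transform(X_pose45, X_frontal)
probe = rng.normal(size=(1, 64))               # new probe feature at ~45 degrees
probe_frontalized = probe @ W                  # compare this against gallery features
print(probe_frontalized.shape)
```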
Attribute preserved face de-identification
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139096
Amin Jourabloo, Xi Yin, Xiaoming Liu
{"title":"Attribute preserved face de-identification","authors":"Amin Jourabloo, Xi Yin, Xiaoming Liu","doi":"10.1109/ICB.2015.7139096","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139096","url":null,"abstract":"In this paper, we recognize the need of de-identifying a face image while preserving a large set of facial attributes, which has not been explicitly studied before. We verify the underling assumption that different visual features are used for identification and attribute classification. As a result, the proposed approach jointly models face de-identification and attribute preservation in a unified optimization framework. Specifically, a face image is represented by the shape and appearance parameters of AAM. Motivated by k-Same, we select k images that share the most similar attributes with those of a test image. Instead of using the average of k images, adopted by k-Same methods, we formulate an objective function and use gradient descent to learn the optimal weights for fusing k images. Experimental results show that our proposed approach performs substantially better than the baseline method with a lower face recognition rate, while preserving more facial attributes.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121725922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 85
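The abstract describes selecting k attribute-similar images and learning fusion weights by optimizing an objective, rather than taking the plain k-Same average. The sketch below illustrates that weighted-fusion idea on synthetic vectors; the linear "identity" and "attribute" projections are stand-ins for the AAM and attribute features, and a generic constrained optimizer replaces the gradient descent used in the paper.

```python
# Minimal sketch of the weighted k-Same fusion idea: given k attribute-similar
# face representations, learn fusion weights that push the fused face away from
# the test identity while keeping its attribute representation close. The
# projections and data below are illustrative stand-ins, not the AAM/attribute
# features or the objective of Jourabloo, Yin and Liu.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, k = 50, 5                           # feature dimension, number of selected faces
x_test = rng.normal(size=d)            # representation of the face to de-identify
candidates = rng.normal(size=(k, d))   # k faces with the most similar attributes

P_id = rng.normal(size=(d, 20))        # stand-in identity feature projection
P_attr = rng.normal(size=(d, 10))      # stand-in attribute feature projection
alpha = 0.5                            # trade-off between the two terms

def objective(w):
    fused = w @ candidates
    attr_term = np.sum(((fused - x_test) @ P_attr) ** 2)   # keep attributes close
    id_term = np.sum(((fused - x_test) @ P_id) ** 2)       # push identity away
    return attr_term - alpha * id_term

w0 = np.full(k, 1.0 / k)               # start from the plain k-Same average
res = minimize(objective, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * k,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(res.x)                           # learned fusion weights over the k candidates
```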
A biomechanical approach to iris normalization
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139041
Inmaculada Tomeo-Reyes, A. Ross, A. Clark, V. Chandran
{"title":"A biomechanical approach to iris normalization","authors":"Inmaculada Tomeo-Reyes, A. Ross, A. Clark, V. Chandran","doi":"10.1109/ICB.2015.7139041","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139041","url":null,"abstract":"The richness of the iris texture and its variability across individuals make it a useful biometric trait for personal authentication. One of the key stages in classical iris recognition is the normalization process, where the annular iris region is mapped to a dimensionless pseudo-polar coordinate system. This process results in a rectangular structure that can be used to compensate for differences in scale and variations in pupil size. Most iris recognition methods in the literature adopt linear sampling in the radial and angular directions when performing iris normalization. In this paper, a biomechanical model of the iris is used to define a novel nonlinear normalization scheme that improves iris recognition accuracy under different degrees of pupil dilation. The proposed biomechanical model is used to predict the radial displacement of any point in the iris at a given dilation level, and this information is incorporated in the normalization process. Experimental results on the WVU pupil light reflex database (WVU-PLR) indicate the efficacy of the proposed technique, especially when matching iris images with large differences in pupil size.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126540330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
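The key change relative to classical normalization is that radial sampling becomes nonlinear, following a predicted radial displacement. The sketch below shows where such a nonlinearity plugs into a standard rubber-sheet unwrapping; the displacement function g is a simple illustrative nonlinearity, not the biomechanical model of the paper.

```python
# Minimal sketch of nonlinear (vs. linear) radial sampling during iris
# normalization. The function g() is an illustrative nonlinearity; in the paper
# the radial displacement is predicted by a biomechanical model of iris tissue
# at a given dilation level.
import numpy as np

def normalize_iris(image, center, r_pupil, r_iris, radial_res=64, angular_res=256,
                   g=lambda t: t):
    """Unwrap the annular iris region into a rectangle.

    g maps the normalized radial coordinate t in [0, 1] to a (possibly
    nonlinear) sampling position; g(t) = t reproduces the usual linear scheme.
    """
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    ts = np.linspace(0, 1, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=image.dtype)
    for i, t in enumerate(ts):
        r = r_pupil + g(t) * (r_iris - r_pupil)       # radial sampling position
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        out[i, :] = image[ys, xs]
    return out

# Example: linear vs. a mildly nonlinear radial sampling on a synthetic image.
img = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
linear = normalize_iris(img, center=(160, 120), r_pupil=30, r_iris=90)
nonlin = normalize_iris(img, center=(160, 120), r_pupil=30, r_iris=90,
                        g=lambda t: t ** 1.3)
print(linear.shape, nonlin.shape)
```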
Latent fingerprint match using Minutia Spherical Coordinate Code
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139061
Fengde Zheng, Chunyu Yang
{"title":"Latent fingerprint match using Minutia Spherical Coordinate Code","authors":"Fengde Zheng, Chunyu Yang","doi":"10.1109/ICB.2015.7139061","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139061","url":null,"abstract":"This paper proposes a fingerprint match algorithm using Minutia Spherical Coordinate Code (MSCC). This algorithm is a modified version of Minutia Cylinder Code (MCC). The advantage of this algorithm is its compact feature representation. Binary vector of every minutia only needs 288 bits, while MCC needs 448 or 1792 bits. This algorithm also uses a greedy alignment approach which can rediscover minutiae pairs lost in original stage. Experiments on AFIS data and NIST special data27 demonstrate the effectiveness of the proposed approach. We compare this algorithm to MCC. The experiments show that MSCC has better matching accuracy than MCC. The average compressed feature size is 2.3 Kbytes, while the average compressed feature size of MCC is 4.84 Kbytes in NIST SD27.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129687668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
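Since MSCC represents each minutia as a compact 288-bit binary vector, comparison reduces to bit-level similarity plus a pairing step. The sketch below shows Hamming-similarity comparison with a greedy pairing of minutiae; the descriptors are random placeholders and the construction of the spherical-coordinate code itself is not reproduced here.

```python
# Minimal sketch of comparing compact binary minutia descriptors (e.g. 288-bit
# vectors) with a greedy pairing step. Descriptors are random placeholders;
# this is not the MSCC construction itself.
import numpy as np

def hamming_similarity(a, b):
    """Similarity in [0, 1] between two equal-length binary vectors."""
    return 1.0 - np.count_nonzero(a != b) / a.size

def greedy_match(desc_latent, desc_file, min_sim=0.55):
    """Greedily pair minutiae by descending descriptor similarity."""
    sims = np.array([[hamming_similarity(a, b) for b in desc_file]
                     for a in desc_latent])
    pairs, used_rows, used_cols = [], set(), set()
    for idx in np.argsort(sims, axis=None)[::-1]:        # best candidate pairs first
        i, j = np.unravel_index(idx, sims.shape)
        if sims[i, j] < min_sim:
            break
        if i not in used_rows and j not in used_cols:
            pairs.append((i, j, sims[i, j]))
            used_rows.add(i)
            used_cols.add(j)
    return pairs

rng = np.random.default_rng(0)
latent = rng.integers(0, 2, size=(12, 288), dtype=np.uint8)      # sparse latent print
file_print = rng.integers(0, 2, size=(60, 288), dtype=np.uint8)  # file (tenprint) minutiae
print(len(greedy_match(latent, file_print)))
```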
Discriminative regularized metric learning for person re-identification
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139075
Venice Erin Liong, Yongxin Ge, Jiwen Lu
{"title":"Discriminative regularized metric learning for person re-identification","authors":"Venice Erin Liong, Yongxin Ge, Jiwen Lu","doi":"10.1109/ICB.2015.7139075","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139075","url":null,"abstract":"Person re-identification aims to match people across non-overlapping cameras, and recent advances have shown that metric learning is an effective technique for person re-identification. However, most existing metric learning methods suffer from the small sample size (SSS) problem due to the limited amount of labeled training samples. In this paper, we propose a new discriminative regularized metric learning (DRML) method for person re-identification. Specifically, we exploit discriminative information of training samples to regulate the eigenvalues of the intra-class and inter-class covariance matrices so that the distance metric estimated is less biased. Experimental results on three widely used datasets validate the effectiveness of our proposed method for person re-identification.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124533161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
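The core idea in the abstract is to regularize the eigenvalues of the estimated covariance matrices so the metric is less biased under small sample sizes. The sketch below shows that ingredient in isolation: eigenvalue shrinkage of the within-class covariance before inverting it into a Mahalanobis-style metric. The shrinkage rule and the omission of the inter-class term are simplifications, not the full DRML formulation.

```python
# Minimal sketch of one ingredient of metric learning under small sample sizes:
# shrinking the eigenvalues of an estimated covariance matrix before inverting
# it to build a Mahalanobis-style distance. The shrinkage rule and the use of
# only the within-class covariance are simplifications, not the DRML model.
import numpy as np

def regularized_inverse(cov, beta=0.3):
    """Invert a covariance matrix after blending its eigenvalues with their mean."""
    vals, vecs = np.linalg.eigh(cov)
    vals = np.clip(vals, 0.0, None)
    vals = (1 - beta) * vals + beta * vals.mean()     # eigenvalue regularization
    return vecs @ np.diag(1.0 / vals) @ vecs.T

def within_class_scatter(X, y):
    d = X.shape[1]
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        Xc = Xc - Xc.mean(axis=0)
        S_w += Xc.T @ Xc
    return S_w / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))            # few labeled samples, as in re-id data
y = np.repeat(np.arange(20), 3)          # 20 identities, 3 images each
M = regularized_inverse(within_class_scatter(X, y))
diff = X[0] - X[3]
print(diff @ M @ diff)                   # squared distance under the learned metric
```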
Exploring dorsal finger vein pattern for robust person recognition
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139059
Ramachandra Raghavendra, C. Busch
{"title":"Exploring dorsal finger vein pattern for robust person recognition","authors":"Ramachandra Raghavendra, C. Busch","doi":"10.1109/ICB.2015.7139059","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139059","url":null,"abstract":"Finger vein based biometric recognition has increasingly generated interest amongst biometric researchers because of the accuracy, robustness and anti-spoofing propertie. Prior efforts the are documented in the finger vein biometrics literature have only investigated the ventral vein pattern that is formed on the lower part of the finger underneath the skin surface. This paper investigates a new finger vein biometric approach by exploring the vein pattern that is present in the dorsal finger region. Thus, the dorsal finger vein pattern can be used as an independent biometric characteristic useful for the recognition of the target subject. We presented a complete automated approach with the key steps of image capturing, Region of Interest (ROI) extraction, pre-processing to enhance the vein pattern, feature extraction and comparison. This paper also introduces a new database of dorsal finger vein patterns from 125 subjects that resulted in 500 unique fingers with 10 samples each that results in a total of 5000 dorsal finger vein samples. Extensive experiments carried out on our new dorsal finger vein database achieve promising accuracy and thereby provide new insights on this new biometric approach.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115860242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
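The abstract lists the processing chain: ROI extraction, vein enhancement, feature extraction and comparison. The skeleton below wires those stages together with simple stand-ins (fixed central crop, CLAHE enhancement, adaptive thresholding, pixel-agreement score); none of the stages is the authors' actual algorithm.

```python
# Skeleton of the processing chain described in the abstract. Each stage is a
# simple illustrative stand-in, not the algorithms used by Raghavendra and Busch.
import cv2
import numpy as np

def extract_roi(image):
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]   # central crop placeholder

def enhance_veins(roi):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(roi)                                 # local contrast enhancement

def extract_features(enhanced):
    # Keep dark, line-like structures (veins) via adaptive thresholding.
    return cv2.adaptiveThreshold(enhanced, 1, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, blockSize=15, C=5)

def compare(feat_a, feat_b):
    return float(np.mean(feat_a == feat_b))                 # fraction of agreeing pixels

probe = (np.random.rand(200, 300) * 255).astype(np.uint8)
reference = (np.random.rand(200, 300) * 255).astype(np.uint8)
score = compare(extract_features(enhance_veins(extract_roi(probe))),
                extract_features(enhance_veins(extract_roi(reference))))
print(score)
```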
Band selection for Gabor feature based hyperspectral palmprint recognition
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139104
L. Shen, Ziyi Dai, Sen Jia, Meng Yang, Zhihui Lai, Shiqi Yu
{"title":"Band selection for Gabor feature based hyperspectral palmprint recognition","authors":"L. Shen, Ziyi Dai, Sen Jia, Meng Yang, Zhihui Lai, Shiqi Yu","doi":"10.1109/ICB.2015.7139104","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139104","url":null,"abstract":"Hyperspectral imaging has recently been introduced into face and palmprint recognition and is now drawing much attention of researchers in this area. Compared to simple 2D imaging technology, hyperspectral image can bring much more information. Due to its ablity to jointly explore the spatial-spectral domain, 3D Gabor wavelets have been successfully applied for hyperspectral palmprint recognition. In this approach, a set of 52 three-dimensional Gabor wavelets with different frequencies and orientations were designed and convolved with the cube to extract discriminative information in the joint spatial-spectral domain. However, there is also much redundancy among the hyperpecstral data, which makes the feature extraction computationally expensive. In this paper, we propose to use AP (affinity propagation) based clustering approach to select representative band images from available large data. As the number of bands has been greatly reduced, the feature extraction process can be efficiently speed up. Experimental results on the publicly available HK-PolyU hyperspectral palmprint database show that the proposed approach not only improves the efficiency, but also reduces the EER of 3D Gabor feature based method from 4% to 3.26%.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116292089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
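Band selection here means clustering the band images with affinity propagation and keeping only the exemplars for Gabor feature extraction. A minimal sketch with scikit-learn's AffinityPropagation on a synthetic cube is shown below; the band similarity (image correlation) and the synthetic grouping of bands are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of selecting representative spectral bands with affinity
# propagation clustering: bands are clustered by the similarity of their
# (flattened) images and only the exemplar band of each cluster is kept.
# The similarity definition and the synthetic cube are illustrative assumptions.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
n_bands, h, w = 52, 32, 32
groups = np.repeat(np.arange(4), 13)                 # 52 bands in 4 spectral groups
base = rng.normal(size=(4, h, w))
cube = base[groups] + 0.1 * rng.normal(size=(n_bands, h, w))   # stand-in palm cube

# Similarity between bands: correlation of their flattened images.
flat = cube.reshape(n_bands, -1)
similarity = np.corrcoef(flat)

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(similarity)
exemplar_bands = ap.cluster_centers_indices_         # indices of representative bands
print(exemplar_bands)

# Downstream, Gabor features would be extracted only from cube[exemplar_bands].
```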
Palm region extraction for contactless palmprint recognition
2015 International Conference on Biometrics (ICB) | Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139058
Koichi Ito, Takuto Sato, Shoichiro Aoyama, S. Sakai, Shusaku Yusa, T. Aoki
{"title":"Palm region extraction for contactless palmprint recognition","authors":"Koichi Ito, Takuto Sato, Shoichiro Aoyama, S. Sakai, Shusaku Yusa, T. Aoki","doi":"10.1109/ICB.2015.7139058","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139058","url":null,"abstract":"Palm region extraction is one of the most important processes in palmprint recognition, since the accuracy of extracted palm regions has a significant impact on recognition performance. Especially in contactless recognition systems, a palm region has to be extracted from a palm image by taking into consideration a variety of hand poses. Most conventional methods of palm region extraction assume that all the fingers are spread and a palm faces to a camera. This assumption forces users to locate his/her hand with limited pose and position, resulting in impairing the flexibility of the contactless palmprint recognition system. Addressing the above problem, this paper proposes a novel palm region extraction method robust against hand pose. Through a set of experiments using our databases which contains palm images with different hand pose and the public database, we demonstrate that the proposed method exhibits efficient performance compared with conventional methods.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124821490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
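The pose-robust method itself is not detailed in the abstract, so the sketch below only illustrates the kind of conventional baseline it improves upon: segment the hand, take the point farthest from the hand contour (via a distance transform) as the palm centre, and crop a square region around it.

```python
# Minimal sketch of a common baseline for palm-region extraction (not the
# pose-robust method proposed by Ito et al.): distance-transform the hand mask,
# use its maximum as the palm centre, and crop a square ROI around it.
import cv2
import numpy as np

def extract_palm_region(hand_mask, gray_image):
    """hand_mask: uint8 binary mask (255 = hand); gray_image: uint8 grayscale."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, max_val, _, (cx, cy) = cv2.minMaxLoc(dist)   # palm centre and inscribed radius
    half = int(0.7 * max_val)
    y0, y1 = max(cy - half, 0), cy + half
    x0, x1 = max(cx - half, 0), cx + half
    return gray_image[y0:y1, x0:x1]

# Synthetic example: a filled circle standing in for a segmented hand.
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(mask, center=(160, 120), radius=80, color=255, thickness=-1)
gray = (np.random.rand(240, 320) * 255).astype(np.uint8)
print(extract_palm_region(mask, gray).shape)
```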