2017 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Score normalization in stratified biometric systems
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272712
S. Tulyakov, Nishant Sankaran, S. Setlur, V. Govindaraju
{"title":"Score normalization in stratified biometric systems","authors":"S. Tulyakov, Nishant Sankaran, S. Setlur, V. Govindaraju","doi":"10.1109/BTAS.2017.8272712","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272712","url":null,"abstract":"Stratified biometric system can be defined as a system in which the subjects, their templates or matching scores can be separated into two or more categories, or strata, and the matching decisions can be made separately for each stratum. In this paper we investigate the properties of the strat-ifiedbiometric system and, in particular, possible strata creation strategies, score normalization and acceptance decisions, expected performance improvements due to stratification. We perform our experiments on face recognition matching scores from IARPA Janus CS2 dataset.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114971167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
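The per-stratum decision scheme described above lends itself to a compact illustration. Below is a minimal sketch assuming impostor score statistics have already been estimated separately for each stratum; the function name `znorm_by_stratum` and the choice of z-normalization are illustrative assumptions, not necessarily the normalization the paper evaluates.

```python
import numpy as np

def znorm_by_stratum(scores, strata, impostor_stats):
    """Z-normalize matching scores separately within each stratum.

    scores: raw matching scores, shape (n,)
    strata: stratum label per score, shape (n,)
    impostor_stats: dict mapping stratum -> (mean, std) of its impostor scores
    (assumes every stratum present in `strata` has an entry here)
    """
    normalized = np.empty_like(scores, dtype=float)
    for s, (mu, sigma) in impostor_stats.items():
        mask = strata == s
        normalized[mask] = (scores[mask] - mu) / sigma
    return normalized

# Illustrative usage: two strata with different impostor score distributions.
scores = np.array([0.62, 0.85, 0.40, 0.91])
strata = np.array([0, 1, 0, 1])
stats = {0: (0.35, 0.10), 1: (0.55, 0.15)}  # assumed impostor (mean, std) per stratum
print(znorm_by_stratum(scores, strata, stats))
```

After such normalization, a single global threshold acts like a separate acceptance decision per stratum on the raw scores.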
Demography-based facial retouching detection using subclass supervised sparse autoencoder
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-09-22 DOI: 10.1109/BTAS.2017.8272732
Aparna Bharati, Mayank Vatsa, Richa Singh, K. Bowyer, Xin Tong
{"title":"Demography-based facial retouching detection using subclass supervised sparse autoencoder","authors":"Aparna Bharati, Mayank Vatsa, Richa Singh, K. Bowyer, Xin Tong","doi":"10.1109/BTAS.2017.8272732","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272732","url":null,"abstract":"Digital retouching of face images is becoming more widespread due to the introduction of software packages that automate the task. Several researchers have introduced algorithms to detect whether a face image is original or retouched. However, previous work on this topic has not considered whether or how accuracy of retouching detection varies with the demography of face images. In this paper, we introduce a new Multi-Demographic Retouched Faces (MDRF) dataset, which contains images belonging to two genders, male and female, and three ethnicities, Indian, Chinese, and Caucasian. Further, retouched images are created using two different retouching software packages. The second major contribution of this research is a novel semi-supervised autoencoder incorporating “sub-class” information to improve classification. The proposed approach outperforms existing state-of-the-art detection algorithms for the task of generalized retouching detection. Experiments conducted with multiple combinations of ethnicities show that accuracy of retouching detection can vary greatly based on the demographics of the training and testing images.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"462 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132591968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 23
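As one plausible reading of the "sub-class" supervision idea, the sketch below attaches a subclass-prediction head to a sparse autoencoder in PyTorch. The architecture, the KL sparsity penalty, and the loss weights (`rho`, `beta`, `gamma`) are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubclassSparseAE(nn.Module):
    """Autoencoder whose latent code must also predict a demographic subclass."""
    def __init__(self, in_dim=1024, hid_dim=128, n_subclasses=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.subclass_head = nn.Linear(hid_dim, n_subclasses)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.subclass_head(h), h

def loss_fn(x, x_hat, logits, h, subclass_labels, rho=0.05, beta=1e-3, gamma=0.1):
    recon = F.mse_loss(x_hat, x)
    # KL sparsity penalty pushing mean hidden activations toward rho
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    sparsity = (rho * torch.log(rho / rho_hat)
                + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    # Supervised term: the code must separate subclasses (e.g., gender x ethnicity)
    subclass = F.cross_entropy(logits, subclass_labels)
    return recon + beta * sparsity + gamma * subclass

model = SubclassSparseAE()
x = torch.randn(8, 1024)          # stand-in feature vectors
y = torch.randint(0, 6, (8,))     # stand-in subclass labels
x_hat, logits, h = model(x)
print(loss_fn(x, x_hat, logits, h, y))
```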
FingerNet: An unified deep network for fingerprint minutiae extraction
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-09-07 DOI: 10.1109/BTAS.2017.8272688
Yao Tang, Fei Gao, Jufu Feng, Yuhang Liu
{"title":"FingerNet: An unified deep network for fingerprint minutiae extraction","authors":"Yao Tang, Fei Gao, Jufu Feng, Yuhang Liu","doi":"10.1109/BTAS.2017.8272688","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272688","url":null,"abstract":"Minutiae extraction is of critical importance in automated fingerprint recognition. Previous works on rolled/slap fingerprints failed on latent fingerprints due to noisy ridge patterns and complex background noises. In this paper, we propose a new way to design deep convolutional network combining domain knowledge and the representation ability of deep learning. In terms of orientation estimation, segmentation, enhancement and minutiae extraction, several typical traditional methods performed well on rolled/slap fingerprints are transformed into convolutional manners and integrated as an unified plain network. We demonstrate that this pipeline is equivalent to a shallow network with fixed weights. The network is then expanded to enhance its representation ability and the weights are released to learn complex background variance from data, while preserving end-to-end differentiability. Experimental results on NIST SD27 latent database and FVC 2004 slap database demonstrate that the proposed algorithm outperforms the state-of-the-art minutiae extraction algorithms. Code is made publicly available at: https://github.com/felixTY/FingerNet.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125575745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 126
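The central trick, rewriting a classical fixed-weight operation as a convolution and then releasing its weights for learning, can be sketched as follows. The use of Sobel gradient filters (a standard ingredient of classical ridge orientation estimation) is an illustrative assumption, not the exact FingerNet layer.

```python
import torch
import torch.nn as nn

def sobel_as_conv(trainable=False):
    """Wrap fixed Sobel gradient filters as a conv layer. With trainable=False
    it reproduces the classical operator; with trainable=True the weights are
    released to be refined from data, keeping the pipeline differentiable."""
    conv = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
    gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    with torch.no_grad():
        conv.weight[0, 0] = gx       # horizontal gradient filter
        conv.weight[1, 0] = gx.t()   # vertical gradient filter
    conv.weight.requires_grad = trainable
    return conv

fixed = sobel_as_conv(trainable=False)    # "shallow network with fixed weights"
released = sobel_as_conv(trainable=True)  # expanded, learnable counterpart
print(fixed(torch.randn(1, 1, 8, 8)).shape)  # torch.Size([1, 2, 8, 8])
```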
Facial 3D model registration under occlusions with sensiblepoints-based reinforced hypothesis refinement
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-09-02 DOI: 10.1109/BTAS.2017.8272734
Yuhang Wu, I. Kakadiaris
{"title":"Facial 3D model registration under occlusions with sensiblepoints-based reinforced hypothesis refinement","authors":"Yuhang Wu, I. Kakadiaris","doi":"10.1109/BTAS.2017.8272734","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272734","url":null,"abstract":"Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129349901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
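A reward of the kind described, scoring a pose hypothesis by how well projected model points land on high-confidence regions, might look like the sketch below. The nearest-pixel lookup and the function name are assumptions; the paper's actual reward combines confidence maps from two fully convolutional networks.

```python
import numpy as np

def alignment_reward(points_2d, confidence_map):
    """Average confidence at the pixels hit by projected 3D model points.

    points_2d: (n, 2) array of projected (x, y) coordinates
    confidence_map: (h, w) array in [0, 1], e.g. an FCN output
    """
    h, w = confidence_map.shape
    reward = 0.0
    for x, y in np.round(points_2d).astype(int):
        if 0 <= x < w and 0 <= y < h:     # points projected off-image score 0
            reward += confidence_map[y, x]
    return reward / len(points_2d)

# Candidate pose hypotheses can then be ranked by this reward.
```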
Subspace selection to suppress confounding source domain information in AAM transfer learning
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-28 DOI: 10.1109/BTAS.2017.8272730
Azin Asgarian, A. Ashraf, David J. Fleet, B. Taati
{"title":"Subspace selection to suppress confounding source domain information in AAM transfer learning","authors":"Azin Asgarian, A. Ashraf, David J. Fleet, B. Taati","doi":"10.1109/BTAS.2017.8272730","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272730","url":null,"abstract":"Active appearance models (AAMs) have seen tremendous success in face analysis. However, model learning depends on the availability of detailed annotation of canonical landmark points. As a result, when accurate AAM fitting is required on a different set of variations (expression, pose, identity), a new dataset is collected and annotated. To overcome the need for time consuming data collection and annotation, transfer learning approaches have received recent attention. The goal is to transfer knowledge from previously available datasets (source) to a new dataset (target). We propose a subspace transfer learning method, in which we select a subspace from the source that best describes the target space. We propose a metric to compute the directional similarity between the source eigenvectors and the target subspace. We show an equivalence between this metric and the variance of target data when projected onto source eigenvectors. Using this equivalence, we select a subset of source principal directions that capture the variance in target data. To define our model, we augment the selected source subspace with the target subspace learned from a handful of target examples. In experiments done on six public datasets, we show that our approach outperforms the state of the art in terms of the RMS fitting error as well as the percentage of test examples for which AAM fitting converges to the ground truth.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130013782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
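The selection criterion, ranking source principal directions by the variance of the projected target data, translates directly into code. A minimal NumPy sketch with illustrative names and random stand-in data:

```python
import numpy as np

def select_source_directions(source_eigvecs, target_data, k):
    """Keep the k source eigenvectors capturing the most target variance.

    source_eigvecs: (d, m) matrix with source principal directions as columns
    target_data: (n, d) matrix of target examples
    """
    X = target_data - target_data.mean(axis=0)    # center the target data
    variances = (X @ source_eigvecs).var(axis=0)  # variance along each direction
    top = np.argsort(variances)[::-1][:k]
    return source_eigvecs[:, top], variances[top]

rng = np.random.default_rng(0)
source = rng.normal(size=(200, 10))
eigvecs = np.linalg.svd(source - source.mean(0), full_matrices=False)[2].T
target = rng.normal(size=(30, 10)) * np.linspace(2.0, 0.5, 10)  # anisotropic target
kept, var = select_source_directions(eigvecs, target, k=3)
print(var)  # target variance along the three selected source directions
```

The selected source directions would then be augmented with a subspace learned from the few available target examples, as the abstract describes.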
The unconstrained ear recognition challenge
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-23 DOI: 10.1109/BTAS.2017.8272761
Ž. Emeršič, Dejan Štepec, V. Štruc, P. Peer, Anjith George, Adil Ahmad, E. Omar, T. Boult, Reza Safdari, Yuxiang Zhou, S. Zafeiriou, Dogucan Yaman, Fevziye Irem Eyiokur, H. K. Ekenel
{"title":"The unconstrained ear recognition challenge","authors":"Ž. Emeršič, Dejan Štepec, V. Štruc, P. Peer, Anjith George, Adil Ahmad, E. Omar, T. Boult, Reza Safdari, Yuxiang Zhou, S. Zafeiriou, Dogucan Yaman, Fevziye Irem Eyiokur, H. K. Ekenel","doi":"10.1109/BTAS.2017.8272761","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272761","url":null,"abstract":"In this paper we present the results of the Unconstrained Ear Recognition Challenge (UERC), a group benchmarking effort centered around the problem of person recognition from ear images captured in uncontrolled conditions. The goal of the challenge was to assess the performance of existing ear recognition techniques on a challenging large-scale dataset and identify open problems that need to be addressed in the future. Five groups from three continents participated in the challenge and contributed six ear recognition techniques for the evaluation, while multiple baselines were made available for the challenge by the UERC organizers. A comprehensive analysis was conducted with all participating approaches addressing essential research questions pertaining to the sensitivity of the technology to head rotation, flipping, gallery size, large-scale recognition and others. The top performer of the UERC was found to ensure robust performance on a smaller part of the dataset (with 180 subjects) regardless of image characteristics, but still exhibited a significant performance drop when the entire dataset comprising 3,704 subjects was used for testing.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115117872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
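Challenge results of this kind are typically reported as Cumulative Match Characteristic (CMC) curves. The sketch below computes a closed-set CMC from a probe-by-gallery similarity matrix; it is a generic evaluation routine, assumed here for illustration rather than the UERC protocol itself.

```python
import numpy as np

def cmc(similarity, gallery_ids, probe_ids, max_rank=10):
    """Fraction of probes whose true identity is within the top-r matches.
    Assumes every probe identity occurs in the gallery (closed-set)."""
    gallery_ids = np.asarray(gallery_ids)
    order = np.argsort(-similarity, axis=1)        # best gallery match first
    hits = gallery_ids[order] == np.asarray(probe_ids)[:, None]
    first_hit = hits.argmax(axis=1)                # rank index of the true match
    return np.array([(first_hit < r).mean() for r in range(1, max_rank + 1)])

sim = np.array([[0.9, 0.2, 0.4], [0.1, 0.3, 0.8]])   # 2 probes x 3 gallery entries
print(cmc(sim, gallery_ids=[0, 1, 2], probe_ids=[0, 1], max_rank=3))  # [0.5 1. 1.]
```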
FaceBoxes: A CPU real-time face detector with high accuracy
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-17 DOI: 10.1109/BTAS.2017.8272675
Shifeng Zhang, Xiangyu Zhu, Zhen Lei, Hailin Shi, Xiaobo Wang, S. Li
{"title":"FaceBoxes: A CPU real-time face detector with high accuracy","authors":"Shifeng Zhang, Xiangyu Zhu, Zhen Lei, Hailin Shi, Xiaobo Wang, S. Li","doi":"10.1109/BTAS.2017.8272675","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272675","url":null,"abstract":"Although tremendous strides have been made in face detection, one of the remaining open challenges is to achieve real-time speed on the CPU as well as maintain high performance, since effective models for face detection tend to be computationally prohibitive. To address this challenge, we propose a novel face detector, named FaceBoxes, with superior performance on both speed and accuracy. Specifically, our method has a lightweight yet powerful network structure that consists of the Rapidly Digested Convolutional Layers (RDCL) and the Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable FaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the receptive fields and discretizing anchors over different layers to handle faces of various scales. Besides, we propose a new anchor densification strategy to make different types of anchors have the same density on the image, which significantly improves the recall rate of small faces. As a consequence, the proposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU for VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the number of faces. We comprehensively evaluate this method and present state-of-the-art detection performance on several face detection benchmark datasets, including the AFW, PASCAL face, and FDDB.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128667876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 230
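One way to realize the anchor densification strategy is to replace each original anchor with an n x n grid of same-scale anchors whose centers are evenly offset, so that small anchors reach the same density on the image as large ones. The sketch below follows that reading; the exact offset scheme and the function name `densify_anchors` are assumptions.

```python
import numpy as np

def densify_anchors(cx, cy, scale, n):
    """Replace one anchor at (cx, cy) with an n x n grid of anchors of the
    same scale, evenly offset around the original center."""
    offsets = (np.arange(n) + 0.5) / n - 0.5      # n=2 -> [-0.25, 0.25]
    return [(cx + dx * scale, cy + dy * scale, scale, scale)
            for dy in offsets for dx in offsets]

# A 32-pixel anchor densified 4x contributes 16 anchors around its center,
# improving the recall of small faces.
print(densify_anchors(cx=16.0, cy=16.0, scale=32.0, n=4)[:3])
```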
Continuous user authentication via unlabeled phone movement patterns
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-15 DOI: 10.1109/BTAS.2017.8272696
R. Kumar, P. P. Kundu, Diksha Shukla, V. Phoha
{"title":"Continuous user authentication via unlabeled phone movement patterns","authors":"R. Kumar, P. P. Kundu, Diksha Shukla, V. Phoha","doi":"10.1109/BTAS.2017.8272696","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272696","url":null,"abstract":"In this paper, we propose a novel continuous authentication system for smartphone users. The proposed system entirely relies on unlabeled phone movement patterns collected through smartphone accelerometer. The data was collected in a completely unconstrained environment over five to twelve days. The contexts of phone usage were identified using k-means clustering. Multiple profiles, one for each context, were created for every user. Five machine learning algorithms were employed for classification of genuine and impostors. The performance of the system was evaluated over a diverse population of 57 users. The mean equal error rates achieved by Logistic Regression, Neural Network, kNN, SVM, and Random Forest were 13.7%, 13.5%, 12.1%, 10.7%, and 5.6% respectively. A series of statistical tests were conducted to compare the performance of the classifiers. The suitability of the proposed system for different types of users was also investigated using the failure to enroll policy.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127667245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
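The pipeline, clustering unlabeled movement data into usage contexts and then training one genuine-versus-impostor classifier per context, can be sketched with scikit-learn. The feature dimensionality, the cluster count, and the random stand-in data below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = rng.normal(size=(600, 12))    # stand-in accelerometer feature vectors
labels = rng.integers(0, 2, size=600)    # 1 = genuine user, 0 = impostor

# Identify usage contexts from the (unlabeled) movement features.
contexts = KMeans(n_clusters=3, n_init=10, random_state=1).fit(features)

# One profile (classifier) per context, as in the multi-profile scheme above.
models = {}
for c in range(3):
    mask = contexts.labels_ == c
    models[c] = RandomForestClassifier(random_state=1).fit(features[mask], labels[mask])

# At test time, route each sample to the model of its predicted context.
sample = features[:1]
c = int(contexts.predict(sample)[0])
print(models[c].predict_proba(sample))
```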
Generative adversarial network-based synthesis of visible faces from polarimetric thermal faces
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-08 DOI: 10.1109/BTAS.2017.8272687
He Zhang, Vishal M. Patel, B. Riggan, Shuowen Hu
{"title":"Generative adversarial network-based synthesis of visible faces from polarimetrie thermal faces","authors":"He Zhang, Vishal M. Patel, B. Riggan, Shuowen Hu","doi":"10.1109/BTAS.2017.8272687","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272687","url":null,"abstract":"The large domain discrepancy between faces captured in polarimetric (or conventional) thermal and visible domain makes cross-domain face recognition quite a challenging problem for both human-examiners and computer vision algorithms. Previous approaches utilize a two-step procedure (visible feature estimation and visible image reconstruction) to synthesize the visible image given the corresponding polarimetric thermal image. However, these are regarded as two disjoint steps and hence may hinder the performance of visible face reconstruction. We argue that joint optimization would be a better way to reconstruct more photo-realistic images for both computer vision algorithms and human-examiners to examine. To this end, this paper proposes a Generative Adversarial Network-based Visible Face Synthesis (GAN-VFS) method to synthesize more photo-realistic visible face images from their corresponding polarimetric images. To ensure that the encoded visible-features contain more semantically meaningful information in reconstructing the visible face image, a guidance sub-network is involved into the training procedure. To achieve photo realistic property while preserving discriminative characteristics for the reconstructed outputs, an identity loss combined with the perceptual loss are optimized in the framework. Multiple experiments evaluated on different experimental protocols demonstrate that the proposed method achieves state-of-the-art performance.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"2155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128904094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53
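The combined objective, a photometric term plus a perceptual loss on deep features plus an identity loss on face embeddings, might be assembled as in the PyTorch sketch below. The VGG-16 feature cut, the loss weights, and the assumed pretrained face embedding network `face_net` are illustrative choices, not the paper's exact configuration.

```python
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen ImageNet VGG-16 features serve as the perceptual feature extractor.
vgg_features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def synthesis_loss(fake, real, face_net, w_perc=1.0, w_id=0.1):
    """fake/real: batches of synthesized and ground-truth visible images;
    face_net: an assumed pretrained, frozen face embedding network."""
    photometric = F.l1_loss(fake, real)                        # pixel-level term
    perceptual = F.mse_loss(vgg_features(fake), vgg_features(real))
    identity = F.mse_loss(face_net(fake), face_net(real))      # preserve identity
    return photometric + w_perc * perceptual + w_id * identity
```

In the full GAN-VFS framework, this generator-side loss would be combined with the usual adversarial term from a discriminator.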
Unconstrained Face Detection and Open-Set Face Recognition Challenge
2017 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2017-08-08 DOI: 10.1109/BTAS.2017.8272759
Manuel Günther, Peiyun Hu, C. Herrmann, Chi-Ho Chan, Min Jiang, Shufan Yang, A. Dhamija, Deva Ramanan, J. Beyerer, J. Kittler, Mohamad Al Jazaery, Mohammad Iqbal Nouyed, G. Guo, Cezary Stankiewicz, T. Boult
{"title":"Unconstrained Face Detection and Open-Set Face Recognition Challenge","authors":"Manuel Günther, Peiyun Hu, C. Herrmann, Chi-Ho Chan, Min Jiang, Shufan Yang, A. Dhamija, Deva Ramanan, J. Beyerer, J. Kittler, Mohamad Al Jazaery, Mohammad Iqbal Nouyed, G. Guo, Cezary Stankiewicz, T. Boult","doi":"10.1109/BTAS.2017.8272759","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272759","url":null,"abstract":"Face detection and recognition benchmarks have shifted toward more difficult environments. The challenge presented in this paper addresses the next step in the direction of automatic detection and identification of people from outdoor surveillance cameras. While face detection has shown remarkable success in images collected from the web, surveillance cameras include more diverse occlusions, poses, weather conditions and image blur. Although face verification or closed-set face identification have surpassed human capabilities on some datasets, open-set identification is much more complex as it needs to reject both unknown identities and false accepts from the face detector. We show that unconstrained face detection can approach high detection rates albeit with moderate false accept rates. By contrast, open-set face recognition is currently weak and requires much more attention.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127989018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
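The open-set requirement, rejecting both unknown identities and detector false accepts, reduces at decision time to a thresholded identification rule. A minimal sketch, illustrative rather than any participant's method:

```python
import numpy as np

def open_set_identify(similarities, gallery_ids, threshold):
    """Return the best-matching gallery identity only if its similarity
    clears the rejection threshold; otherwise report 'unknown'."""
    best = int(np.argmax(similarities))
    if similarities[best] < threshold:
        return "unknown"   # unknown identity or a false accept from the detector
    return gallery_ids[best]

print(open_set_identify(np.array([0.31, 0.78, 0.12]), ["alice", "bob", "carol"], 0.6))
```

Lowering the threshold trades fewer false rejections of known people for more false accepts of unknowns.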