2019 International Conference on Biometrics (ICB): Latest Publications

Likelihood Ratio based Loss to finetune CNNs for Very Low Resolution Face Verification
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987249
Dan Zeng, R. Veldhuis, L. Spreeuwers, Qijun Zhao
Abstract: In this paper, we propose a likelihood ratio based loss for very low-resolution face verification. Existing loss functions either improve the softmax loss to learn large-margin facial features or impose Euclidean margin constraints between image pairs. These methods have been shown to outperform traditional softmax, but they do not guarantee optimally discriminative features. We therefore propose a loss function based on the likelihood ratio classifier, an optimal classifier in the Neyman-Pearson sense, which gives the highest verification rate at a given false accept rate and is thus well suited to biometric verification. To verify the efficacy of the proposed loss function, we apply it to the very low-resolution face recognition problem. We conduct extensive experiments on the challenging SCface dataset, where the resolution of the faces to be recognized is below 16 × 16. The results show that the proposed approach outperforms state-of-the-art methods.
Citations: 1
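The likelihood-ratio classifier at the core of the abstract above can be sketched numerically. Under the illustrative assumption that a scalar similarity score follows known Gaussian distributions for genuine and impostor pairs, the log-likelihood ratio and the Neyman-Pearson decision look as follows; the distribution parameters and threshold are hypothetical, not taken from the paper.

```python
import numpy as np

def log_likelihood_ratio(score, mu_gen=0.8, sd_gen=0.1, mu_imp=0.2, sd_imp=0.15):
    """log p(score | genuine) - log p(score | impostor) for a scalar similarity score."""
    def log_normal_pdf(x, mu, sd):
        # Log-density of a univariate normal distribution.
        return -0.5 * np.log(2 * np.pi * sd ** 2) - (x - mu) ** 2 / (2 * sd ** 2)
    return log_normal_pdf(score, mu_gen, sd_gen) - log_normal_pdf(score, mu_imp, sd_imp)

# Neyman-Pearson decision: accept the pair when the LLR exceeds a threshold
# chosen to meet a target false-accept rate (threshold 0.0 is illustrative).
accept = log_likelihood_ratio(0.75) > 0.0
```

A loss built on this score rewards genuine pairs with high LLR and impostor pairs with low LLR, which is what makes it verification-rate-optimal at a fixed false accept rate.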
OU-ISIR Wearable Sensor-based Gait Challenge: Age and Gender
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987235
T. N. Thanh, Yuichi Hattori, Md. Atiqur Rahman Ahad, Anindya Das Antar, Masud Ahmed, D. Muramatsu, Yasushi Makihara, Y. Yagi, Sozo Inoue, Tahera Hossain
Abstract: Wearable computing resources such as smartphones are developing fast thanks to advances in technology and their strong support of daily life. People use smartphones for communication, work, entertainment, business, traveling, and browsing information. Health-care applications, however, remain very limited due to many challenges. We would like to break this limitation and boost research that supports human health. One important step for a health-care system is to understand the age and gender of the user wearing the sensor through gait. Gait is chosen because it is the most dominant daily activity and is considered to carry not only identity but also physical and medical conditions. To this end, we organize a challenging competition on gender and age prediction using wearable sensors. The evaluation is mainly based on the published OU-ISIR inertial dataset, currently the world's largest inertial gait dataset.
Citations: 13
A novel scheme to address the fusion uncertainty in multi-modal continuous authentication schemes on mobile devices
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987390
Max Smith-Creasey, M. Rajarajan
Abstract: Interest in continuous mobile authentication schemes has increased in recent years. These schemes use sensors on mobile devices to collect biometric data about a user, and combining multiple sensors in a multi-modal scheme has been shown to improve accuracy. However, sensor scores are often combined using simplistic techniques such as averaging, and to date the effect of uncertainty in score fusion has not been explored. In this paper, we present a novel Dempster-Shafer based score fusion approach for continuous authentication schemes. Our approach combines the sensor scores while factoring in the uncertainty of each sensor. We propose and evaluate five techniques for computing uncertainty. Our proof-of-concept system is tested on three state-of-the-art datasets and compared with common fusion techniques. We find that our proposed approach yields the highest accuracies compared to the other fusion techniques and achieves equal error rates as low as 8.05%.
Citations: 6
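Dempster's rule of combination, on which the fusion approach above rests, can be sketched for a two-hypothesis frame {genuine, impostor} with an explicit uncertainty mass. The mass values below are illustrative; how the paper maps sensor scores to masses, and its five uncertainty measures, are not reproduced here.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {G (genuine), I (impostor), U (uncertain)}
    using Dempster's rule. Each mass is a tuple (m_G, m_I, m_U) summing to 1."""
    g1, i1, u1 = m1
    g2, i2, u2 = m2
    conflict = g1 * i2 + i1 * g2       # mass assigned to contradictory hypotheses
    k = 1.0 - conflict                 # normalisation factor
    g = (g1 * g2 + g1 * u2 + u1 * g2) / k
    i = (i1 * i2 + i1 * u2 + u1 * i2) / k
    u = (u1 * u2) / k
    return (g, i, u)

# A confident sensor combined with an uncertain one: the uncertain sensor's
# evidence is discounted rather than averaged in.
fused = dempster_combine((0.7, 0.1, 0.2), (0.4, 0.1, 0.5))
```

The key difference from plain averaging is that a sensor reporting high uncertainty contributes little evidence, instead of dragging the fused score toward its unreliable reading.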
FLDet: A CPU Real-time Joint Face and Landmark Detector
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987289
Chubin Zhuang, Shifeng Zhang, Xiangyu Zhu, Zhen Lei, Jinqiao Wang, S. Li
Abstract: Face detection and alignment are treated as two independent tasks and conducted sequentially in most face applications. However, the two tasks are highly related and can be integrated into a single model. In this paper, we propose a novel single-shot detector for joint face detection and alignment, namely FLDet, with remarkable performance in both speed and accuracy. Specifically, FLDet consists of three main modules: a Rapidly Digested Backbone (RDB), a Lightweight Feature Pyramid Network (LFPN) and a Multi-task Detection Module (MDM). The RDB quickly shrinks the spatial size of feature maps to guarantee CPU real-time speed. The LFPN integrates different detection layers in a top-down fashion to enrich the features of low-level layers with little extra time overhead. The MDM jointly performs face and landmark detection over different layers to handle faces of various scales. Besides, we introduce a new data augmentation strategy to make full use of the face alignment dataset. As a result, the proposed FLDet can run at 20 FPS on a single CPU core and 120 FPS on a GPU for VGA-resolution images. Notably, FLDet can be trained end-to-end and its inference time is invariant to the number of faces. We achieve competitive results on both face detection and face alignment benchmark datasets, including AFW, PASCAL FACE, FDDB and AFLW.
Citations: 7
Suppressing Gender and Age in Face Templates Using Incremental Variable Elimination
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987331
Philipp Terhörst, N. Damer, Florian Kirchbuchner, Arjan Kuijper
Abstract: Recent research on soft biometrics showed that more information than just a person's identity can be deduced from biometric data. Using face templates only, information about gender, age, ethnicity, health state, and even sexual orientation can be obtained automatically. Since for most applications these templates are expected to be used for recognition purposes only, this raises major privacy issues. Previous work addressed this problem purely at the image level, considering function-creep attackers without knowledge of the system's privacy mechanism. In this work, we propose a soft-biometric privacy enhancing approach that reduces a given biometric template by eliminating the variables most important for predicting soft-biometric attributes. Training a decision tree ensemble allows deriving a variable importance measure that is used to incrementally eliminate variables that allow predicting sensitive attributes. Unlike previous work, we consider a scenario of function-creep attackers with explicit knowledge of the privacy mechanism, and we evaluate our approach on a publicly available database against eight baseline solutions. The results show that in many cases IVE is able to suppress gender and age to a high degree with a negligible loss of the template's recognition ability. Contrary to previous work, which is limited to the suppression of binary (gender) attributes, IVE is able, by design, to suppress binary, categorical, and continuous attributes.
Citations: 28
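The incremental variable elimination (IVE) loop described above can be sketched as follows. The paper ranks template dimensions with a decision-tree-ensemble importance measure; this sketch substitutes a simple correlation-based importance as a stand-in, purely for illustration, and the synthetic data is hypothetical.

```python
import numpy as np

def ive(templates, attribute, n_eliminate, step=1):
    """Return indices of template dimensions surviving after incrementally
    eliminating the n_eliminate dimensions most predictive of `attribute`."""
    keep = np.arange(templates.shape[1])
    removed = 0
    while removed < n_eliminate:
        sub = templates[:, keep]
        # Importance proxy: |correlation| of each dimension with the attribute
        # (the paper uses a decision-tree-ensemble importance instead).
        imp = np.abs([np.corrcoef(sub[:, j], attribute)[0, 1]
                      for j in range(sub.shape[1])])
        worst = np.argsort(imp)[-step:]   # most attribute-predictive dimensions
        keep = np.delete(keep, worst)
        removed += step
    return keep

# Synthetic demo: dimension 0 of a 5-D template leaks gender, so it goes first.
rng = np.random.default_rng(0)
gender = rng.integers(0, 2, 200)
templates = rng.normal(size=(200, 5))
templates[:, 0] += 3.0 * gender
surviving = ive(templates, gender, n_eliminate=1)
```

Re-ranking after every elimination step, rather than ranking once, is what makes the procedure incremental: removing one leaky dimension can change which of the remaining ones is most predictive.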
Face Sketch Colorization via Supervised GANs
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987296
S. RamyaY., Soumyadeep Ghosh, Mayank Vatsa, Richa Singh
Abstract: Face sketch recognition is one of the most challenging heterogeneous face recognition problems. The large domain difference between hand-drawn sketches and color photos, along with the subjectivity and variation introduced by eye-witness descriptions and the skill of sketch artists, makes the problem demanding. Therefore, despite several research attempts, sketch-to-photo matching is still considered an arduous problem. In this research, we propose to transform a hand-drawn sketch into a color photo using an end-to-end two-stage generative adversarial model, followed by learning a discriminative classifier for matching the transformed images with color photos. The proposed image-to-image transformation model reduces the modality gap between sketch images and color photos, resulting in higher identification accuracies and images with better visual quality than the ground-truth sketch images.
Citations: 2
SEFD: A Simple and Effective Single Stage Face Detector
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987231
Lei Shi, Xiang Xu, I. Kakadiaris
Abstract: Recently, state-of-the-art face detectors extend a backbone network with additional feature fusion and context extractor layers to localize multi-scale faces, and therefore struggle to balance computational efficiency against detection performance. In this paper, we introduce a simple and effective face detector (SEFD). SEFD leverages a computationally lightweight Feature Aggregation Module (FAM) to achieve highly efficient feature fusion and context enhancement. In addition, an aggregation loss is introduced to mitigate the imbalance in feature-representation power between the classification and regression tasks, which arises because the backbone network is initialized from a pre-trained model that focuses on classification rather than on both regression and classification. SEFD achieves state-of-the-art performance on the UFDD dataset and mAPs of 95.3%, 94.1%, 88.3% and 94.9%, 94.0%, 88.2% on the easy, medium and hard subsets of the WIDER Face validation and testing datasets, respectively.
Citations: 2
Deep Contactless Fingerprint Unwarping
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987292
Ali Dabouei, Sobhan Soleymani, J. Dawson, N. Nasrabadi
Abstract: Contactless fingerprints have emerged as a convenient, inexpensive, and hygienic way of capturing fingerprint samples. However, cross-matching contactless fingerprints to legacy contact-based fingerprints is a challenging task due to the elastic and perspective distortion between the two modalities. Current cross-matching methods merely rectify the elastic distortion of the contact-based samples to reduce the geometric mismatch and ignore the perspective distortion of contactless fingerprints. Adapting classical deformation-correction techniques to compensate for the perspective distortion requires a large number of minutiae-annotated contactless fingerprints, yet annotating minutiae of contactless samples is a labor-intensive and inaccurate task, especially for regions severely distorted by the perspective projection. In this study, we propose a deep model that rectifies the perspective distortion of contactless fingerprints by combining a rectification network and a ridge enhancement network. The ridge enhancement network provides indirect supervision for training the rectification network and removes the need for ground-truth values of the estimated warp parameters. Comprehensive experiments using two public datasets of contactless fingerprints show that the proposed unwarping approach results, on average, in a 17% increase in the number of detectable minutiae from contactless fingerprints. Consequently, the proposed model achieves an equal error rate of 7.71% and a Rank-1 accuracy of 61.01% on the challenging '2D/3D' fingerprint dataset.
Citations: 13
Does Generative Face Completion Help Face Recognition?
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987388
Joe Mathai, I. Masi, Wael AbdAlmageed
Abstract: Face occlusions, covering either the majority or the discriminative parts of the face, can break facial perception and produce a drastic loss of information. Biometric systems such as recent deep face recognition models are not immune to obstructions or other objects covering parts of the face. While most current face recognition methods are not optimized to handle occlusions, there have been a few attempts to improve robustness directly in the training stage. Unlike those, we propose to study the effect of generative face completion on recognition. We offer a face completion encoder-decoder, based on a convolutional operator with a gating mechanism, trained with an ample set of face occlusions. To systematically evaluate the impact of realistic occlusions on recognition, we propose to play the occlusion game: we render 3D objects onto different face parts, providing precise knowledge of the impact of effectively removing those occlusions. Extensive experiments on Labeled Faces in the Wild (LFW) and its more difficult variant LFW-BLUFR confirm that face completion is able to partially restore face perception in machine vision systems for improved recognition.
Citations: 25
On the Effectiveness of Laser Speckle Contrast Imaging and Deep Neural Networks for Detecting Known and Unknown Fingerprint Presentation Attacks
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987428
H. Mirzaalian, Mohamed E. Hussein, W. Abd-Almageed
Abstract: Fingerprint presentation attack detection (FPAD) is becoming an increasingly challenging problem due to the continuous advancement of attack techniques, which generate "realistic-looking" fake fingerprint presentations. Recently, laser speckle contrast imaging (LSCI) has been introduced as a new sensing modality for FPAD. LSCI has the interesting characteristic of capturing the blood flow under the skin surface. To study the importance and effectiveness of LSCI for FPAD, we conduct a comprehensive study using different patch-based deep neural network architectures, including 2D and 3D convolutional networks as well as a recurrent network using long short-term memory (LSTM) units. The study demonstrates that strong FPAD performance can be achieved using LSCI. We evaluate the different models on a new large dataset consisting of 3743 bona fide samples, collected from 335 unique subjects, and 218 presentation attack samples covering six different types of attacks. To examine the effect of changing the training and testing sets, we conduct a 3-fold cross-validation evaluation; to examine the effect of an unseen attack, we apply a leave-one-attack-out strategy. The FPAD classification results of the networks, which are separately optimized and tuned for the temporal and spatial patch sizes, indicate that the best performance is achieved by the LSTM.
Citations: 10
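The leave-one-attack-out protocol mentioned in the abstract is a simple splitting scheme: each attack type is held out for testing while the detector is trained on the remaining types, so every attack is evaluated once as an unseen attack. A minimal sketch (the attack-type names are hypothetical, not from the paper):

```python
def leave_one_attack_out(attack_types):
    """Yield (train_attacks, held_out_attack) splits so that every attack
    type is evaluated once as an unseen attack."""
    for held_out in attack_types:
        train = [a for a in attack_types if a != held_out]
        yield train, held_out

# With six attack types, this produces six train/test splits.
splits = list(leave_one_attack_out(["ecoflex", "gelatin", "latex",
                                    "playdoh", "silicone", "woodglue"]))
```

Reporting the worst split, rather than the average, gives a conservative estimate of how the detector handles attack types it has never seen.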