Latest Publications from the 2020 IEEE International Joint Conference on Biometrics (IJCB)

Fingerprint Feature Extraction by Combining Texture, Minutiae, and Frequency Spectrum Using Multi-Task CNN
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-27 DOI: 10.1109/IJCB48548.2020.9304861
Ai Takahashi, Yoshinori Koda, Koichi Ito, T. Aoki
Abstract: Although most fingerprint matching methods utilize minutia points and/or the texture of fingerprint images as fingerprint features, the frequency spectrum is also a useful feature, since a fingerprint is composed of ridge patterns with an inherent frequency band. We propose a novel CNN-based method for extracting fingerprint features from texture, minutiae, and the frequency spectrum. In order to extract effective texture features from local regions around the minutiae, a minutia attention module is introduced into the proposed method. We also propose new data augmentation methods that take the characteristics of fingerprint images into account to increase the number of images during training, since we train only on a public dataset that includes only a few fingerprint classes. Through a set of experiments using FVC2004 DB1 and DB2, we demonstrate that the proposed method achieves efficient fingerprint verification performance compared with commercial fingerprint matching software and a conventional method.
Citations: 19
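The abstract's premise that ridge patterns occupy an inherent frequency band suggests a simple illustration: a radially averaged FFT magnitude spectrum as a fixed-length descriptor of a fingerprint patch. The NumPy sketch below shows the general idea only, not the paper's multi-task CNN; the patch size, bin count, and synthetic ridge-like test pattern are my assumptions.

```python
import numpy as np

def frequency_spectrum_feature(patch: np.ndarray, bins: int = 32) -> np.ndarray:
    """Radially averaged FFT magnitude spectrum of a grayscale patch.

    Fingerprint ridges repeat with a characteristic frequency, so the
    energy distribution over spatial frequency is a compact descriptor.
    """
    # Center the spectrum so low frequencies sit in the middle.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    # Average magnitude within concentric rings -> rotation-invariant profile.
    edges = np.linspace(0, radius.max(), bins + 1)
    feature = np.array([
        spectrum[(radius >= lo) & (radius < hi)].mean()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
    return feature / (feature.sum() + 1e-8)  # normalize to a distribution

# Example: a synthetic ridge-like patch with one dominant frequency.
x = np.linspace(0, 8 * np.pi, 128)
patch = np.sin(x)[None, :].repeat(128, axis=0)
print(frequency_spectrum_feature(patch)[:8])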
Assessing the Quality of Swipe Interactions for Mobile Biometric Systems
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-27 DOI: 10.1109/IJCB48548.2020.9304858
Marco Santopietro, R. Vera-Rodríguez, R. Guest, A. Morales, A. Acien
Abstract: Quality estimation is a key topic in biometrics, allowing optimisation and improvement of existing authentication systems by predicting model performance based on the goodness of the sample or the user. In this paper, we propose a quality metric for swipe gestures on mobile devices. We evaluate a quality score for subjects at enrolment and for swipe samples, estimate three quality groups, and explore the correlation between our quality score and the performance of a state-of-the-art biometric authentication classifier. A further analysis based on the combined effects of subject quality and the number of enrolment samples is conducted, investigating whether increasing or decreasing the enrolment size affects authentication performance for the different quality groups. Results are shown for three public datasets, highlighting how high-quality users achieve a lower equal error rate than medium- and low-quality users, while high-quality samples receive a higher similarity score from the classifier.
Citations: 2
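The correlation the authors draw between quality groups and classifier performance rests on the equal error rate, the standard biometric operating point where the false accept rate equals the false reject rate. Below is a minimal sketch of computing EER from genuine and impostor similarity scores; the synthetic score distributions are assumptions, not data from the paper.

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: the threshold where FAR (impostors accepted) equals
    FRR (genuine users rejected). Higher score = more similar."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2

# Synthetic scores: genuine comparisons score higher on average.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)
impostor = rng.normal(0.5, 0.1, 500)
print(f"EER = {equal_error_rate(genuine, impostor):.3f}")
```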
How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-26 DOI: 10.1109/IJCB48548.2020.9304909
U. Ciftci, Ilke Demir, L. Yin
Abstract: Fake portrait video generation techniques have been posing a new threat to society, with photorealistic deep fakes used for political propaganda, celebrity imitation, forged evidence, and other identity-related manipulations. Following these generation techniques, a number of detection approaches have proven useful thanks to their high classification accuracy. Nevertheless, almost no effort has been spent on tracking down the source of deep fakes. We propose an approach not only to separate deep fakes from real videos, but also to discover the specific generative model behind a deep fake. Some purely deep-learning-based approaches classify deep fakes using CNNs that, in effect, learn the residuals of the generator. We believe these residuals contain additional information, and that we can reveal these manipulation artifacts by disentangling them with biological signals. Our key observation is that the spatiotemporal patterns in biological signals can be conceived as a representative projection of the residuals. To justify this observation, we extract PPG cells from real and fake videos and feed them to a state-of-the-art classification network to detect the generative model for each video. Our results indicate that our approach can detect fake videos with 97.29% accuracy, and the source model with 93.39% accuracy.
Citations: 48
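The PPG cells the authors feed to their classifier build on remote photoplethysmography: subtle periodic colour changes in skin caused by the heartbeat. As a hedged illustration of the underlying signal only, the sketch below extracts a crude rPPG estimate by averaging the green channel of a skin region per frame; the paper's actual PPG-cell construction is considerably more involved, and the synthetic clip, frame rate, and heart-rate band here are assumptions.

```python
import numpy as np

def green_channel_ppg(frames: np.ndarray, fps: float = 30.0):
    """Crude rPPG estimate: mean green intensity of a skin ROI per frame.

    frames: (T, H, W, 3) uint8 RGB clip, assumed to contain mostly skin.
    Returns the detrended signal and its dominant frequency in Hz
    (plausible heart rates fall roughly in the 0.7-4 Hz band).
    """
    signal = frames[..., 1].mean(axis=(1, 2))   # green channel, spatial mean
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return signal, dominant

# Synthetic clip: 10 s whose green channel pulses at 1.2 Hz (~72 bpm).
t = np.arange(300) / 30.0
frames = np.zeros((300, 8, 8, 3), dtype=np.uint8)
frames[..., 1] = (128 + 20 * np.sin(2 * np.pi * 1.2 * t))[:, None, None]
_, hz = green_channel_ppg(frames)
print(f"dominant frequency: {hz:.2f} Hz")
```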
Cross-Spectral Periocular Recognition with Conditional Adversarial Networks
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-26 DOI: 10.1109/IJCB48548.2020.9304899
Kevin Hernandez-Diaz, F. Alonso-Fernandez, J. Bigün
Abstract: This work addresses the challenge of comparing periocular images captured in different spectra, which is known to produce significant drops in performance compared to operating in the same spectrum. We propose the use of Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra, so that biometric verification is carried out in a single spectrum. The proposed setup allows the use of existing feature methods typically optimized to operate in one spectrum. Recognition experiments are done using a number of off-the-shelf periocular comparators based both on hand-crafted features and on CNN descriptors. Using the Hong Kong Polytechnic University Cross-Spectral Iris Images Database (PolyU) as the benchmark dataset, our experiments show that cross-spectral performance is substantially improved if both images are converted to the same spectrum, in comparison to matching features extracted from images in different spectra. In addition, we fine-tune a CNN based on the ResNet50 architecture, obtaining a cross-spectral periocular performance of EER = 1% and GAR > 99% @ FAR = 1%, which is comparable to the state of the art on the PolyU database.
Citations: 9
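The conversion step described above follows the conditional-GAN image-to-image translation recipe. Below is a deliberately tiny encoder-decoder generator in PyTorch with the standard pix2pix-style L1 reconstruction term; the layer sizes, 64-pixel crop resolution, and channel counts are illustrative assumptions, far smaller than anything the paper would train.

```python
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Minimal encoder-decoder generator for image-to-image translation,
    in the spirit of pix2pix: maps a 1-channel NIR crop to a 3-channel
    visible crop (or vice versa with swapped channel counts)."""
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, x):
        return self.decode(self.encode(x))

# One generator step: an L1 reconstruction term against the paired target,
# which pix2pix combines with an adversarial loss from a discriminator.
G = TinyTranslator()
nir = torch.randn(4, 1, 64, 64)            # batch of NIR periocular crops
visible_target = torch.randn(4, 3, 64, 64) # paired visible crops
fake = G(nir)
l1 = nn.functional.l1_loss(fake, visible_target)
print(fake.shape, l1.item())
```

In the paper's setup, verification then runs entirely in one spectrum, so any off-the-shelf single-spectrum comparator can consume the translated images.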
Cross-Domain Identification for Thermal-to-Visible Face Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-19 DOI: 10.1109/IJCB48548.2020.9304937
Cedric Nimpa Fondje, Shuowen Hu, Nathan J. Short, B. Riggan
Abstract: Recent advances in domain adaptation, especially those applied to heterogeneous facial recognition, typically rely upon restrictive Euclidean loss functions (e.g., the L2 norm), which perform best when images from two different domains (e.g., visible and thermal) are co-registered and temporally synchronized. This paper proposes a novel domain adaptation framework that combines a new feature-mapping sub-network with existing deep feature models based on modified network architectures (e.g., VGG16 or ResNet50). This framework is optimized by introducing new cross-domain identity and domain-invariance loss functions for thermal-to-visible face recognition, which alleviate the requirement for precisely co-registered and synchronized imagery. We provide extensive analysis of both the features and the loss functions used, and compare the proposed domain adaptation framework with state-of-the-art feature-based domain adaptation models on a difficult dataset containing facial imagery collected at varying ranges, poses, and expressions. Moreover, we analyze the viability of the proposed framework for more challenging tasks, such as non-frontal thermal-to-visible face recognition.
Citations: 17
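The paper's cross-domain identity and domain-invariance losses are not spelled out in the abstract, so the following is only a generic stand-in for the idea: pull same-subject embeddings from the two spectra together while penalizing a gap between the domains' feature statistics. The moment-matching penalty, the cosine identity loss, and the 0.1 weighting are all assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

def domain_invariance_loss(f_thermal, f_visible):
    """Generic stand-in for a domain-invariance penalty: match the
    first moment of the two domains' embedding distributions."""
    return (f_thermal.mean(dim=0) - f_visible.mean(dim=0)).pow(2).sum()

# Identity term: the same subject captured in both spectra should embed
# close together; a cosine embedding loss is one common choice.
cos = nn.CosineEmbeddingLoss()
f_t = torch.randn(8, 128)  # thermal embeddings for 8 subjects
f_v = torch.randn(8, 128)  # visible embeddings for the same 8 subjects
identity = cos(f_t, f_v, torch.ones(8))  # target +1 = "should be similar"
total = identity + 0.1 * domain_invariance_loss(f_t, f_v)
print(total.item())
```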
Open Source Iris Recognition Hardware and Software with Presentation Attack Detection
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-19 DOI: 10.1109/IJCB48548.2020.9304869
Zhaoyuan Fang, A. Czajka
Abstract: This paper proposes the first open-source iris recognition hardware and software system with presentation attack detection (PAD) known to us, which can be easily assembled for about 75 USD using a Raspberry Pi board and a few peripherals. The primary goal of this work is to offer a low-cost baseline for spoof-resistant iris recognition, which may (a) stimulate research in iris PAD and allow easy prototyping of secure iris recognition systems, (b) offer a low-cost, secure iris recognition alternative to more sophisticated systems, and (c) serve as an educational platform. We propose a lightweight, image-complexity-guided convolutional network for fast and accurate iris segmentation; domain-specific, human-inspired Binarized Statistical Image Features (BSIF) to build an iris template; and a combination of 2D (iris texture) and 3D (photometric stereo-based) features for PAD. The proposed iris recognition runs in about 3.2 seconds and the proposed PAD in about 4.5 seconds on a Raspberry Pi 3B+. The hardware specifications and all source code for the entire pipeline are made available along with this paper.
Citations: 11
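BSIF encodes an image by thresholding the responses of a filter bank, one bit per filter per pixel, after which templates are compared via Hamming distance. The sketch below uses random zero-mean filters as stand-ins for the learned filters (real BSIF filters come from ICA, and the paper uses domain-specific, human-inspired variants); the template size and noise level are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def binary_iris_code(iris: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """BSIF-style encoding: convolve with a filter bank and keep only
    the sign of each response, yielding one bit per filter per pixel."""
    return np.stack([convolve2d(iris, f, mode="same") > 0 for f in filters])

def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of disagreeing bits; ~0.5 for unrelated irises."""
    return np.mean(a != b)

rng = np.random.default_rng(1)
filters = rng.standard_normal((8, 7, 7))
filters -= filters.mean(axis=(1, 2), keepdims=True)     # zero-mean filters
iris_a = rng.standard_normal((64, 512))                 # normalized iris rectangle
iris_b = iris_a + 0.3 * rng.standard_normal((64, 512))  # same eye, noisy capture
print("genuine :", hamming_distance(binary_iris_code(iris_a, filters),
                                    binary_iris_code(iris_b, filters)))
print("impostor:", hamming_distance(binary_iris_code(iris_a, filters),
                                    binary_iris_code(rng.standard_normal((64, 512)),
                                                     filters)))
```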
Domain Private and Agnostic Feature for Modality Adaptive Face Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-10 DOI: 10.1109/IJCB48548.2020.9304895
Ying Xu, Lei Zhang, Qingyan Duan
Abstract: Heterogeneous face recognition is a challenging task due to the large modality discrepancy and insufficient cross-modal samples. Most existing works focus on discriminative feature transformation, metric learning, and cross-modal face synthesis. However, the fact that cross-modal faces are always coupled by domain (modality) and identity information has received little attention. Therefore, how to learn and utilize domain-private and domain-agnostic features for modality-adaptive face recognition is the focus of this work. Specifically, this paper proposes a Feature Aggregation Network (FAN), which includes a disentangled representation module (DRM), a feature fusion module (FFM), and an adaptive penalty metric (APM) learning session. First, in the DRM, two subnetworks, i.e., a domain-private network and a domain-agnostic network, are specially designed for learning modality features and identity features, respectively. Second, in the FFM, the identity features are fused with domain features to achieve a cross-modal bidirectional identity feature transformation, which, to a large extent, further disentangles the modality information from the identity information. Third, considering that a distribution imbalance between easy and hard pairs exists in cross-modal datasets, which increases the risk of model bias, identity-preserving guided metric learning with adaptive hard-pair penalization is proposed for our FAN. The proposed APM also guarantees cross-modality intra-class compactness and inter-class separation. Extensive experiments on benchmark cross-modal face datasets show that our FAN outperforms SOTA methods.
Citations: 2
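The disentangling idea behind the DRM and FFM can be caricatured as two parallel encoders plus a fusion layer that recombines an identity feature with the other modality's domain feature. The sketch below is a generic toy of that structure, not the paper's FAN; every layer size and the fusion form are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchDisentangler(nn.Module):
    """Toy disentangler: one branch learns domain-private (modality)
    features, another learns domain-agnostic (identity) features;
    fusing an identity feature with the *other* modality's domain
    feature performs a cross-modal feature transformation."""
    def __init__(self, dim_in=512, dim_dom=64, dim_id=256):
        super().__init__()
        self.domain_net = nn.Sequential(nn.Linear(dim_in, dim_dom), nn.ReLU())
        self.identity_net = nn.Sequential(nn.Linear(dim_in, dim_id), nn.ReLU())
        self.fuse = nn.Linear(dim_dom + dim_id, dim_id)

    def forward(self, feat, target_domain_feat):
        identity = self.identity_net(feat)
        return self.fuse(torch.cat([identity, target_domain_feat], dim=1))

model = TwoBranchDisentangler()
vis_feat = torch.randn(4, 512)                   # backbone features, visible face
nir_dom = model.domain_net(torch.randn(4, 512))  # domain feature from the NIR side
cross = model(vis_feat, nir_dom)                 # visible identity in NIR "style"
print(cross.shape)
```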
Gender and Ethnicity Classification based on Palmprint and Palmar Hand Images from Uncontrolled Environment
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-08-06 DOI: 10.1109/IJCB48548.2020.9304907
Wojciech Michal Matkowski, A. Kong
Abstract: Soft biometric attributes such as gender, ethnicity, or age may provide useful information for biometrics and forensics applications. Researchers have used, e.g., face, gait, iris, and hand images to classify such attributes. Even though the hand has been widely studied for biometric recognition, relatively little attention has been given to soft biometrics from the hand. Previous studies of soft biometrics based on hand images focused on gender and well-controlled imaging environments. In this paper, gender and ethnicity classification in an uncontrolled environment is considered. Gender and ethnicity labels are collected and provided for subjects in a publicly available database that contains hand images from the Internet. Five deep learning models are fine-tuned and evaluated for gender and ethnicity classification based on palmar (1) full-hand, (2) segmented-hand, and (3) palmprint images. The experimental results indicate that, for gender and ethnicity classification in an uncontrolled environment, full and segmented hand images are more suitable than palmprint images.
Citations: 4
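Fine-tuning a pretrained backbone for a soft-biometric label is a standard recipe: replace the classification head and train with cross-entropy. A minimal PyTorch/torchvision sketch for the 2-class gender case follows; ResNet-50 is just one plausible choice among the five models the paper evaluates, and the random tensors stand in for real palmar hand crops.

```python
import torch
import torch.nn as nn
from torchvision import models

# Swap the classifier head of a pretrained backbone for a 2-way gender
# output (ethnicity would use more classes). Downloads ImageNet weights
# on first use.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

hands = torch.randn(8, 3, 224, 224)   # batch of palmar hand crops
labels = torch.randint(0, 2, (8,))    # gender labels
logits = model(hands)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(loss.item())
```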
Resist: Reconstruction of irises from templates
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-07-31 DOI: 10.1109/IJCB48548.2020.9304912
Sohaib Ahmad, Benjamin Fuller
Abstract: Iris recognition systems transform an iris image into a feature vector. The seminal pipeline segments an image into iris and non-iris pixels, normalizes this region into a fixed-dimension rectangle, and extracts features that are stored, called a template (Daugman, 2009). This template is stored on a system. A future reading of an iris can be transformed and compared against template vectors to determine or verify the identity of an individual. As templates are often stored together, they are a valuable target for an attacker. We show how to invert templates across a variety of iris recognition systems. Our inversion is based on a convolutional neural network architecture we call RESIST (REconStructing IriSes from Templates). We apply RESIST to a traditional Gabor filter pipeline, to a DenseNet (Huang et al., CVPR 2017) feature extractor, and to a DenseNet architecture that works without normalization. Both DenseNet feature extractors are based on the recent ThirdEye recognition system (Ahmad and Fuller, BTAS 2019). When training and testing on the ND-0405 dataset, reconstructed images demonstrate rank-1 accuracies of 100%, 76%, and 96%, respectively, for the three pipelines. The core of our approach is similar to an autoencoder; to obtain high accuracy, this core is integrated into an adversarial network (Goodfellow et al., NeurIPS 2014).
Citations: 10
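The reconstruction core of RESIST is described as similar to an autoencoder, later integrated into an adversarial network. As a hedged sketch of just the inversion direction, the decoder below maps a fixed-length template to a normalized iris rectangle under a plain reconstruction loss; the template dimension, output size, and loss choice are assumptions, and the adversarial part is omitted.

```python
import torch
import torch.nn as nn

class TemplateInverter(nn.Module):
    """Sketch of the inversion idea: a small decoder maps a fixed-length
    iris template back to a normalized iris image. RESIST itself wraps a
    similar reconstruction core in an adversarial (GAN) objective."""
    def __init__(self, template_dim=512):
        super().__init__()
        self.fc = nn.Linear(template_dim, 128 * 4 * 32)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, template):
        x = self.fc(template).view(-1, 128, 4, 32)
        return self.up(x)  # (N, 1, 32, 256) reconstructed iris rectangle

inverter = TemplateInverter()
templates = torch.randn(4, 512)     # stolen templates (random stand-ins)
target = torch.rand(4, 1, 32, 256)  # corresponding normalized iris images
recon = inverter(templates)
loss = nn.functional.binary_cross_entropy(recon, target)
print(recon.shape, loss.item())
```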
Swipe Dynamics as a Means of Authentication: Results From a Bayesian Unsupervised Approach
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-07-27 DOI: 10.1109/IJCB48548.2020.9304876
Parker Lamb, Alexander Millar, Ramon Fuentes
Abstract: The field of behavioural biometrics stands as an appealing alternative to more traditional biometric systems due to its ease of use from a user perspective and potential robustness to presentation attacks. This paper focuses on a specific type of behavioural biometric utilising swipe dynamics, also referred to as touch gestures. In touch-gesture authentication, a user swipes across the touchscreen of a mobile device to perform an authentication attempt. A key characteristic of touch-gesture authentication, and of new behavioural biometrics in general, is the lack of available data to train and validate models. From a machine learning perspective, this presents the classic curse-of-dimensionality problem, and the methodology presented here focuses on Bayesian unsupervised models, as they are well suited to such conditions. This paper presents results from a set of experiments consisting of 38 sessions with labelled 'victim' data as well as blind and over-the-shoulder presentation attacks. Three models are compared using this dataset: two single-mode models, a shrunk covariance estimate and a Bayesian Gaussian distribution, as well as a Bayesian non-parametric infinite mixture of Gaussians, modelled as a Dirichlet process. Equal error rates (EER) for the three models are compared, and attention is paid to how these vary across the two single-mode models with differing numbers of enrolment samples.
Citations: 2
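The paper's third model, a Bayesian non-parametric infinite mixture of Gaussians modelled as a Dirichlet process, has a direct off-the-shelf counterpart in scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior. The sketch below fits one to synthetic enrolment swipes and scores new swipes by log-likelihood; the two features, the data, and the reject-by-likelihood rule are assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Unsupervised enrolment: fit a Dirichlet-process mixture of Gaussians
# to one user's swipe features, then score new swipes by log-likelihood
# (low likelihood -> reject as a likely impostor).
rng = np.random.default_rng(2)
enrol = rng.normal([0.4, 120.0], [0.05, 15.0], size=(30, 2))  # e.g. duration, speed

dpgmm = BayesianGaussianMixture(
    n_components=5,  # truncation level for the infinite mixture
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
).fit(enrol)

genuine_swipe = np.array([[0.42, 118.0]])
attack_swipe = np.array([[0.80, 40.0]])
print("genuine log-lik:", dpgmm.score(genuine_swipe))
print("attack  log-lik:", dpgmm.score(attack_swipe))
```

The truncation level only caps the number of active components; the Dirichlet-process prior prunes the ones it does not need, which is what makes this family attractive at the small enrolment sizes the paper studies.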