2021 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Vulnerability Assessment and Presentation Attack Detection Using a Set of Distinct Finger Vein Recognition Algorithms
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484351
Johannes Schuiki, Georg Wimmer, A. Uhl
Abstract: The act of presenting a forged biometric sample to a biometric capturing device is referred to as a presentation attack. During the last decade this type of attack has been addressed for various biometric traits and is still a widely researched topic. This study follows the idea of a previously published work that employs twelve finger vein recognition algorithms to perform an extensive vulnerability analysis on a presentation attack database. The present work adopts this idea and examines two existing finger vein presentation attack databases with the goal of evaluating, from a wider perspective, how hazardous these presentation attacks are. Additionally, this study shows that presentation attack detection can be achieved by combining the matching scores from different algorithms.
Citations: 4
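The detection idea in the abstract (combining match scores from several distinct recognition algorithms) can be illustrated with a minimal sketch. The fusion rule (a plain mean), the threshold, and the score values below are illustrative assumptions, not the authors' actual method:

```python
# Toy sketch of score-level fusion for presentation attack detection (PAD).
# Each recognition algorithm yields a match score for a probe; a probe is
# flagged as an attack when the fused score falls below a threshold. Mean
# fusion and the 0.5 threshold are illustrative assumptions only.

def fuse_scores(scores):
    """Fuse per-algorithm match scores by simple averaging."""
    return sum(scores) / len(scores)

def is_attack(scores, threshold=0.5):
    """Flag a probe as a presentation attack if the fused score is low."""
    return fuse_scores(scores) < threshold

# Bona fide probes tend to score consistently across algorithms, while an
# artefact often fools some matchers but not all of them.
bona_fide = [0.82, 0.91, 0.78]
artefact = [0.71, 0.22, 0.35]

assert not is_attack(bona_fide)
assert is_attack(artefact)
```

The point of using a *set* of distinct algorithms is exactly this complementarity: an artefact that defeats one matcher may still stand out in the fused score.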
Child Face Age Progression and Regression using Self-Attention Multi-Scale Patch GAN
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484329
Praveen Kumar Chandaliya, N. Nain
Abstract: Face age progression and regression have attracted significant research interest because of their impact on a wide range of practical applications, including finding lost/wanted persons, cross-age face recognition, entertainment, and cosmetic studies. The two primary requirements of face age progression and regression are identity preservation and aging exactitude. Existing state-of-the-art frameworks mostly focus on adult or long-span aging. In this work, we propose a child face age progression and regression framework that generates photo-realistic face images with preserved identity. To facilitate child age synthesis, we apply a multi-scale patch discriminator learning strategy for training conditional generative adversarial nets (cGAN), which increases the stability of the discriminator, thereby making the learning task progressively more difficult for the generator. Moreover, we introduce a Self-Attention Block (SAB) to learn global and long-term dependencies within an internal representation of a child's face. Thus, we present a coarse-to-fine Self-Attention Multi-Scale Patch generative adversarial nets (SAMSP-GAN) model. Our new objective function, together with multi-scale patch discrimination, has shown both qualitative and quantitative improvements over state-of-the-art approaches in terms of face verification, rank-1 identification, and age estimation on benchmark children datasets.
Citations: 7
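The "multi-scale patch" part of the discriminator strategy can be pictured with a small sketch: the same image is cut into patches at several scales, each of which a discriminator would judge separately. Only the patch extraction is sketched here; the image size, the scales, and the plain nested-list image are illustrative assumptions:

```python
# Toy sketch of multi-scale patch extraction, in the spirit of SAMSP-GAN's
# multi-scale patch discriminator. A real discriminator would score each
# patch real/fake; here we only cut the patches.

def patches(image, size):
    """Split a 2D grid into non-overlapping size x size patches."""
    h, w = len(image), len(image[0])
    return [
        [row[x:x + size] for row in image[y:y + size]]
        for y in range(0, h - size + 1, size)
        for x in range(0, w - size + 1, size)
    ]

# An 8x8 "image" with illustrative pixel values.
image = [[float(x + y) for x in range(8)] for y in range(8)]
multi_scale = {s: patches(image, s) for s in (2, 4, 8)}
print({s: len(p) for s, p in multi_scale.items()})  # {2: 16, 4: 4, 8: 1}
```

Finer scales give the discriminator many local judgments (texture), while the coarsest scale covers global structure; combining them is what makes the generator's task progressively harder.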
Leveraging Adversarial Learning for the Detection of Morphing Attacks
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484383
Zander Blasingame, Chen Liu
Abstract: An emerging threat to face recognition systems (FRS) is the face morphing attack, which combines faces from two different identities into a single image that triggers an acceptance for either identity within the FRS. Many existing morphing attack detection (MAD) approaches have been trained and evaluated on datasets with limited variation in image characteristics, which can make them prone to overfitting. Additionally, it has proven difficult to develop MAD algorithms that generalize beyond the morphing attacks they were trained on, as shown by the most recent NIST FRVT MORPH report. Furthermore, single-image-based MAD (S-MAD) has performed poorly, especially compared to its counterpart, differential-based MAD (D-MAD). In this work, we propose a novel architecture for training deep-learning-based S-MAD algorithms that leverages adversarial learning to train a more robust detector. The performance of the proposed S-MAD method is benchmarked against the state-of-the-art VGG19-based S-MAD algorithm over 36 experiments using the ISO/IEC 30107-3 evaluation metrics. The proposed method demonstrates superior and robust detection performance, with less than 5% D-EER when evaluated against different morphing attacks.
Citations: 5
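The equal error rate (EER) reported above is the operating point where the two detection error rates coincide. A minimal sketch of computing it from detection scores follows; the score values and the simple threshold sweep are illustrative assumptions, not the paper's evaluation code:

```python
# Toy sketch: equal error rate (EER) from morph/bona fide detection scores,
# in the spirit of ISO/IEC 30107-3 style evaluation. Higher score = more
# likely a morph. We sweep every observed score as a threshold and return
# the point where the miss and false-alarm rates are closest.

def error_rates(morph_scores, bona_scores, thr):
    # Morphs scoring below the threshold are missed; bona fide images
    # scoring at or above it are falsely flagged.
    miss = sum(s < thr for s in morph_scores) / len(morph_scores)
    false_alarm = sum(s >= thr for s in bona_scores) / len(bona_scores)
    return miss, false_alarm

def eer(morph_scores, bona_scores):
    best = min(
        (abs(m - f), (m + f) / 2)
        for thr in sorted(set(morph_scores + bona_scores))
        for m, f in [error_rates(morph_scores, bona_scores, thr)]
    )
    return best[1]

morphs = [0.9, 0.8, 0.75, 0.5]   # illustrative detector scores for morphs
bona = [0.4, 0.3, 0.55, 0.2]     # illustrative scores for bona fide images
print(eer(morphs, bona))  # 0.25
```

On real data one would interpolate between thresholds rather than picking the nearest crossing, but the sweep above captures the idea.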
Universal Adversarial Spoofing Attacks against Face Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484380
Takuma Amada, Seng Pei Liew, Kazuya Kakizaki, Toshinori Araki
Abstract: We assess the vulnerabilities of deep face recognition systems to images that falsify/spoof multiple identities simultaneously. We demonstrate that, by manipulating the deep feature representation extracted from a face image via imperceptibly small perturbations added at the pixel level using our proposed method, one can fool a face verification system into recognizing that the face image belongs to multiple different identities with a high success rate. One characteristic of the UAXs crafted with our method is that they are universal (identity-agnostic); they are successful even against identities not known in advance. For a certain deep neural network, we show that we are able to spoof almost all tested identities (99%), including those not known beforehand (not included in training). Our results indicate that a multiple-identity attack is a real threat and should be taken into account when deploying face recognition systems.
Citations: 3
Learning Discriminative Speaker Embedding by Improving Aggregation Strategy and Loss Function for Speaker Verification
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484331
Chengfang Luo, Xin Guo, Aiwen Deng, Wei Xu, Junhong Zhao, Wenxiong Kang
Abstract: Embedding-based speaker verification (SV) has witnessed significant progress due to advances in deep convolutional neural networks (DCNN). However, how to improve the discrimination of speaker embeddings in the open-world SV task remains a focus of current research in the community. In this paper, we improve the discriminative power of speaker embeddings in three ways: (1) NeXtVLAD is introduced to aggregate frame-level features; it decomposes the high-dimensional frame-level features into a group of low-dimensional vectors before applying VLAD aggregation. (2) A multi-scale aggregation strategy (MSA) assembled with NeXtVLAD is designed to fully extract speaker information from the frame-level features in different hidden layers of the DCNN. (3) A mutually complementary assembled loss function, consisting of a prototypical loss and a margin-based softmax loss, is proposed to train the model. Extensive experiments conducted on the VoxCeleb-1 dataset show that our proposed system obtains significant performance improvements over the baseline and achieves new state-of-the-art results. The source code is available at https://github.com/LCF2764/Discriminative-Speaker-Embedding.
Citations: 5
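The VLAD aggregation step that NeXtVLAD builds on can be sketched in miniature: frame-level features are soft-assigned to cluster centres and their residuals accumulated into one utterance-level descriptor. The cluster centres, the soft-assignment by negative squared distance, and the tiny feature dimension below are all illustrative assumptions (NeXtVLAD additionally decomposes features into low-dimensional groups, which is omitted here):

```python
# Toy sketch of VLAD-style aggregation of frame-level features into a single
# fixed-length descriptor, loosely in the spirit of (NeXt)VLAD.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def vlad(frames, centres):
    """Accumulate soft-assigned residuals (frame - centre) per cluster."""
    k, d = len(centres), len(centres[0])
    out = [[0.0] * d for _ in range(k)]
    for f in frames:
        # Soft assignment: closer centres receive higher weight.
        logits = [-sum((fi - ci) ** 2 for fi, ci in zip(f, c)) for c in centres]
        w = softmax(logits)
        for j, c in enumerate(centres):
            for i in range(d):
                out[j][i] += w[j] * (f[i] - c[i])
    # Flatten the k x d residual matrix into one k*d descriptor.
    return [v for row in out for v in row]

frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]      # 3 frames, dim 2
centres = [[0.0, 0.0], [1.0, 1.0]]                 # k = 2 clusters
desc = vlad(frames, centres)
print(len(desc))  # k * d = 4
```

However many frames an utterance has, the descriptor length stays k * d, which is what makes the representation usable as a fixed-size speaker embedding input.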
STERLING: Towards Effective ECG Biometric Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484360
Kuikui Wang, Gongping Yang, Lu Yang, Yuwen Huang, Yilong Yin
Abstract: Electrocardiogram (ECG) biometric recognition has recently attracted considerable attention, and various promising approaches have been proposed. However, due to real, nonstationary ECG noise environments, it remains challenging to perform this technique robustly and precisely. In this paper, we propose a novel ECG biometrics framework named robuSt semanTic spacE leaRning with Local sImilarity preserviNG (STERLING) to learn a latent space in which ECG signals can be robustly and discriminatively represented, with semantic information and local structure preserved. Specifically, a novel loss function is proposed to learn robust semantic representations by introducing an l2,1-norm loss and making full use of the supervised information. In addition, a graph regularization is imposed to preserve the local structure information within each subject. Finally, matching can be done effectively in the learnt latent space. Experimental results on three widely used datasets indicate that the proposed framework outperforms the state of the art.
Citations: 2
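The l2,1 norm mentioned in the abstract is the sum over rows of each row's Euclidean norm; compared with a squared-Frobenius loss, it grows only linearly in a row's magnitude, so whole outlier rows (e.g., noisy heartbeats) are down-weighted. A minimal sketch with an illustrative matrix:

```python
# Toy sketch of the l2,1 norm used in robust losses like STERLING's:
# the sum of per-row l2 norms of a matrix.
import math

def l21_norm(M):
    """Sum of per-row Euclidean norms."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in M)

# Illustrative 3 x 2 residual matrix: row norms are 5, 0, and 13.
M = [[3.0, 4.0], [0.0, 0.0], [5.0, 12.0]]
print(l21_norm(M))  # 5 + 0 + 13 = 18.0
```

Minimizing it encourages row-wise sparsity of the residual: a few rows may carry large errors while the rest are driven to zero, which is the robustness property the loss relies on.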
Static and Dynamic Features Analysis from Human Skeletons for Gait Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484378
Ziqiong Li, Shiqi Yu, Edel B. García Reyes, Caifeng Shan, Yan-Ran Li
Abstract: Gait recognition is an effective way to identify a person because of its non-contact, long-distance acquisition. The lengths of human limbs and the motion patterns of the human body, both derived from human skeletons, have been proven to be effective features for gait recognition. However, because limb lengths and motion patterns are calculated using human prior knowledge, more important or detailed information may be missed. Our method instead obtains the dynamic and static information from human skeletons through disentanglement learning. Experiments show that the features extracted by our method are effective.
Citations: 4
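To make concrete the kind of hand-crafted static feature the paper argues is insufficient on its own, here is a minimal sketch of limb lengths computed from 2D skeleton joints. The joint names, coordinates, and bone list are illustrative assumptions:

```python
# Toy sketch: hand-crafted static gait features (limb lengths) from 2D
# skeleton joints. Disentanglement learning, as in the paper, would learn
# static/dynamic factors instead of relying on such a fixed bone list.
import math

def limb_length(a, b):
    """Euclidean distance between two joint coordinates."""
    return math.dist(a, b)

# Illustrative joint positions (x, y) for one frame.
joints = {
    "hip": (0.0, 1.0), "knee": (0.1, 0.5), "ankle": (0.15, 0.0),
    "shoulder": (0.0, 1.7), "elbow": (0.2, 1.4),
}
bones = [("hip", "knee"), ("knee", "ankle"), ("shoulder", "elbow")]
static_features = [limb_length(joints[a], joints[b]) for a, b in bones]
print(len(static_features))  # one length per bone: 3
```

Any information not captured by the chosen bones (posture, joint-angle dynamics, inter-limb coordination) is lost, which motivates learning the static and dynamic factors directly from the skeleton sequence.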
Estimation of Gait Relative Attribute Distributions using a Differentiable Trade-off Model of Optimal and Uniform Transports
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484362
Yasushi Makihara, Yuta Hayashi, Allam Shehata, D. Muramatsu, Y. Yagi
Abstract: This paper describes a method for estimating gait relative attribute distributions. Existing datasets for gait relative attributes have only three-grade annotations, which cannot be represented in the form of distributions. Thus, we first create a dataset with seven-grade annotations for five gait relative attributes (i.e., beautiful, graceful, cheerful, imposing, and relaxed). Second, we design a deep neural network to handle gait relative attribute distributions. Although the ground truth (i.e., annotation) is given in a relative (or pairwise) manner with some degree of uncertainty (i.e., inconsistency among multiple annotators), it is desirable for the system to output an absolute attribute distribution for each gait input. Therefore, we develop a model that converts a pair of absolute attribute distributions into a relative attribute distribution. More specifically, we formulate the conversion as a transportation process from one absolute attribute distribution to the other, then derive a differentiable model that determines the trade-off between optimal transport and uniform transport. Finally, we learn the network parameters by minimizing the dissimilarity between the estimated and ground-truth distributions through the Kullback-Leibler divergence and the expectation dissimilarity. Experimental results show that the proposed method successfully estimates both absolute and relative attribute distributions.
Citations: 2
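The Kullback-Leibler term in the training objective compares two discrete distributions over the seven annotation grades. A minimal sketch, with illustrative distributions and a small epsilon for numerical safety (the paper's exact formulation is not reproduced here):

```python
# Toy sketch of the KL-divergence dissimilarity between an estimated
# attribute distribution and a ground-truth one over seven grades.
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same grades."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Illustrative seven-grade distributions for one attribute.
ground_truth = [0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.05]
estimated = [0.03, 0.12, 0.22, 0.28, 0.20, 0.10, 0.05]

assert kl_divergence(ground_truth, ground_truth) < 1e-9  # zero for p == q
assert kl_divergence(ground_truth, estimated) > 0.0      # positive otherwise
```

Because KL is insensitive to how far apart the grades are (it treats bins as unordered), the paper pairs it with an expectation dissimilarity, which penalizes shifts in the distribution's mean grade.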
ReSGait: The Real-Scene Gait Dataset
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484347
Zihao Mu, F. M. Castro, M. Marín-Jiménez, Nicolás Guil Mata, Yan-Ran Li, Shiqi Yu
Abstract: Many studies have shown that gait recognition can identify humans at a long distance, with promising results on current datasets. However, those datasets are collected under controlled situations and predefined conditions, which limits the extrapolation of the results to unconstrained situations in which subjects walk freely. To close this gap, we release a novel real-scene gait dataset (ReSGait), the first dataset collected in unconstrained scenarios with freely moving subjects and uncontrolled environmental parameters. Overall, our dataset comprises 172 subjects and 870 video sequences recorded over 15 months. Video sequences are labeled with gender, clothing, carrying condition, walking route taken, and whether a mobile phone was used. The main characteristics that differentiate our dataset from others are therefore: (i) uncontrolled real-life scenes and (ii) a long recording time. Finally, we empirically assess the difficulty of the proposed dataset by evaluating state-of-the-art gait approaches for silhouette and pose modalities. The results reveal an accuracy of less than 35%, showing the inherent difficulty of our dataset compared to other current datasets, on which accuracies are higher than 90%. Thus, our proposed dataset establishes a new level of difficulty in the gait recognition problem, much closer to real life.
Citations: 10
Self-Augmented Heterogeneous Face Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484335
Zongcai Sun, Chaoyou Fu, Mandi Luo, R. He
Abstract: Heterogeneous face recognition (HFR) is quite challenging due to the large discrepancy introduced by cross-domain face images. The limited number of paired face images results in a severe overfitting problem for existing methods. To tackle this issue, we propose a novel self-augmentation method named Mixed Adversarial Examples and Logits Replay (MAELR). Concretely, we first generate adversarial examples and mix them with clean examples in an interpolating way for data augmentation. Simultaneously, we extend the definition of adversarial examples to cross-domain problems. Benefiting from this extension, we can reduce the domain discrepancy to extract domain-invariant features. We further propose a diversity-preserving loss via logits replay, which effectively uses the discriminative features obtained on a large-scale VIS dataset. In this way, we improve feature diversity in a manner that cannot be obtained from mixed-adversarial-example methods alone. Extensive experiments demonstrate that our method alleviates the overfitting problem, significantly improving the recognition performance of HFR.
Citations: 3
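The interpolating "mixing" step described in the abstract resembles mixup-style blending of a clean example with its adversarial counterpart. A minimal sketch, where the pixel vectors, the Beta mixing distribution, and the hand-made "adversarial" perturbation are all illustrative assumptions (no real attack is run):

```python
# Toy sketch of mixing a clean example with an adversarial example using a
# random interpolation weight, mixup-style, for data augmentation.
import random

def mix(clean, adversarial, alpha=0.2):
    """Blend two examples with a Beta(alpha, alpha)-distributed weight."""
    lam = random.betavariate(alpha, alpha)
    return [lam * c + (1 - lam) * a for c, a in zip(clean, adversarial)]

random.seed(0)
clean = [0.5, 0.6, 0.7]
adversarial = [0.52, 0.58, 0.69]  # clean plus a small crafted perturbation
mixed = mix(clean, adversarial)

# The mixture always stays within the interval spanned by the two inputs.
assert all(min(c, a) <= m <= max(c, a)
           for c, a, m in zip(clean, adversarial, mixed))
```

With a small alpha the Beta distribution concentrates near 0 and 1, so most mixtures stay close to one of the two endpoints while occasionally landing in between, which smooths the decision boundary around the adversarial direction.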