2021 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Deep Multi-loss Hashing Network for Palmprint Retrieval and Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484403
Wei Jia, Shuwei Huang, Bin Wang, Lunke Fei, Yang Zhao, Hai Min
{"title":"Deep Multi-loss Hashing Network for Palmprint Retrieval and Recognition","authors":"Wei Jia, Shuwei Huang, Bin Wang, Lunke Fei, Yang Zhao, Hai Min","doi":"10.1109/IJCB52358.2021.9484403","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484403","url":null,"abstract":"With the wide application of biometrics technology, the scale of biometrics databases is increasing rapidly. In this situation, fast retrieval technology is more and more necessary for large-scale biometrics retrieval and recognition. Palmprint recognition is one of the emerging biometrics technologies. However, the research on fast palmprint retrieval algorithm is still preliminary. Hashing is one of the most popular image retrieval technologies due to its fast speed and low storage cost. In this paper, we propose a new deep palmprint hashing method, which integrates classification loss, pairing loss and quantization loss in a unified deep learning framework. Experimental results show that the proposed deep multi-loss hashing method has better performance for palmprint recognition and retrieval than other existing classic hashing methods.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131026803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
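The three-term objective described above maps naturally to code. Below is a minimal PyTorch-style sketch, assuming a margin-based pairing loss, a tanh relaxation of the binary codes, and illustrative loss weights; the abstract does not give the paper's exact formulations, code length, or weighting.

```python
import torch
import torch.nn.functional as F

def multi_loss_hashing(logits, codes, labels, w_pair=1.0, w_quant=0.1):
    """Hedged sketch of a combined hashing objective: classification +
    pairing + quantization. Weights and margin are illustrative."""
    # 1) Classification loss on identity logits.
    cls_loss = F.cross_entropy(logits, labels)

    # 2) Pairing loss (assumed margin-based form): codes of same-identity
    #    pairs pulled together, different-identity pairs pushed apart.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # (B, B) bool
    dist = torch.cdist(codes, codes)                       # (B, B) L2
    margin = 2.0
    pair_loss = torch.where(same, dist, F.relu(margin - dist)).mean()

    # 3) Quantization loss: push relaxed codes toward {-1, +1}.
    quant_loss = (codes.abs() - 1.0).pow(2).mean()

    return cls_loss + w_pair * pair_loss + w_quant * quant_loss

# Toy usage: batch of 8 palmprints, 48-bit codes, 10 identities.
codes = torch.tanh(torch.randn(8, 48))
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(multi_loss_hashing(logits, codes, labels))
```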
A Temporal Memory-based Continuous Authentication System
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484365
S. Gopal, Diksha Shukla
{"title":"A Temporal Memory-based Continuous Authentication System","authors":"S. Gopal, Diksha Shukla","doi":"10.1109/IJCB52358.2021.9484365","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484365","url":null,"abstract":"With the emerging use of technology, verifying a user’s identity continuously throughout a device’s usage has become increasingly important. This paper proposes an authentication system that unobtrusively verifies a user’s identity continuously, based on his/her hand movement patterns captured using accelerometer, while a user performs free-text typing. Our model validates a user’s identity with a verification decision in every ≈ 20ms interval. The authentication model utilizes a short temporal memory of size M of a user’s hand movement patterns. Experiments on different values of M suggests that the model shows an improved and consistent performance by increasing the size of the temporal memory of a user’s hand movement patterns to M ≈ 300ms.The authentication system requires only a user’s hand movement signals in order to authenticate a user on a device. Experiments on the hand movement patterns of 27 volunteer participants, captured using motion sensors of a Sony Smartwatch while they performed free-text typing on a desktop/laptop device, show that our model could achieve an average authentication accuracy of 99.8% with an average False Accept Rate (FAR) of 0.0003 and an average False Reject Rate (FRR) of 0.0034.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134202131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
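The abstract gives enough detail (a ~20 ms decision interval and a temporal memory of M ≈ 300 ms) to sketch the decision loop. The sketch below uses a placeholder score, distance to an enrolled mean vector; the paper's actual verification model is not described in the abstract.

```python
from collections import deque
import numpy as np

class TemporalMemoryVerifier:
    """Hedged sketch: keep a short memory of hand-movement frames and
    emit a verification decision for every new ~20 ms accelerometer
    frame. The scoring function is a placeholder, not the paper's model."""

    def __init__(self, template, memory_ms=300, frame_ms=20, threshold=1.5):
        self.template = np.asarray(template)          # enrolled feature vector
        self.memory = deque(maxlen=memory_ms // frame_ms)  # 15 frames at 20 ms
        self.threshold = threshold

    def verify(self, frame):
        self.memory.append(np.asarray(frame))
        # Aggregate the temporal memory into one feature (mean, assumption).
        feat = np.mean(self.memory, axis=0)
        score = np.linalg.norm(feat - self.template)
        return score < self.threshold                 # accept / reject

# Toy usage: 3-axis accelerometer frames, ~400 ms of data.
rng = np.random.default_rng(0)
verifier = TemporalMemoryVerifier(template=[0.0, 0.0, 9.8])
for _ in range(20):
    decision = verifier.verify([0.0, 0.0, 9.8] + rng.normal(0, 0.1, 3))
print("accepted:", decision)
```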
RamFace: Race Adaptive Margin Based Face Recognition for Racial Bias Mitigation
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484352
Zhanjia Yang, Xiangping Zhu, Changyuan Jiang, Wenshuang Liu, Linlin Shen
{"title":"RamFace: Race Adaptive Margin Based Face Recognition for Racial Bias Mitigation","authors":"Zhanjia Yang, Xiangping Zhu, Changyuan Jiang, Wenshuang Liu, Linlin Shen","doi":"10.1109/IJCB52358.2021.9484352","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484352","url":null,"abstract":"Recent studies show that there exist significant racial bias among state-of-the-art (SOTA) face recognition algorithms, i.e., the accuracy for Caucasian is consistently higher than that for other races like African and Asian. To mitigate racial bias, we propose the race adaptive margin based face recognition (RamFace) model, designed under the multi-task learning framework with the race classification as the auxiliary task. The experiments show that the race classification task can enforce the model to learn the racial features and thus improve the discriminability of the extracted feature representations. In addition, a racial bias robust loss function, i.e., race adaptive margin loss, is proposed such that different optimal margins can be automatically derived for different races in training the model, which further mitigates the racial bias. The experimental results show that on RFW dataset, our model not only achieves SOTA face recognition accuracy but also mitigates the racial bias problem. Besides, RamFace is also tested on several public face recognition evaluation benchmarks, i.e., LFW, CPLFW and CALFW, and achieves better performance than the commonly used face recognition methods, which justifies the generalization capability of RamFace.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133507571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
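The race adaptive margin loss can be pictured as an ArcFace-style additive angular margin that varies by race group. A hedged sketch follows; the fixed per-group margins and scale here are placeholders, whereas the paper derives the optimal margins automatically during training.

```python
import torch
import torch.nn.functional as F

def race_adaptive_margin_loss(embeddings, weight, labels, race_ids,
                              margins=(0.35, 0.45, 0.50), scale=64.0):
    """Hedged sketch of an ArcFace-style loss with a per-race additive
    angular margin. Margin values and scale are illustrative only."""
    emb = F.normalize(embeddings)                    # (B, D)
    w = F.normalize(weight)                          # (C, D) class centers
    cos = emb @ w.t()                                # (B, C) cosine logits
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    m = torch.tensor(margins)[race_ids]              # (B,) margin per sample
    # Add the race-dependent margin only to the target-class angle.
    target = torch.cos(theta.gather(1, labels.view(-1, 1)).squeeze(1) + m)
    logits = cos.clone()
    logits[torch.arange(len(labels)), labels] = target
    return F.cross_entropy(scale * logits, labels)

# Toy usage: 4 samples, 128-d embeddings, 100 identities, 3 race groups.
emb = torch.randn(4, 128)
w = torch.randn(100, 128)
labels = torch.randint(0, 100, (4,))
races = torch.randint(0, 3, (4,))
print(race_adaptive_margin_loss(emb, w, labels, races))
```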
Feasibility of Morphing-Attacks in Vascular Biometrics
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484372
Altan K. Aydemir, Jutta Hämmerle-Uhl, A. Uhl
{"title":"Feasibility of Morphing-Attacks in Vascular Biometrics","authors":"Altan K. Aydemir, Jutta Hämmerle-Uhl, A. Uhl","doi":"10.1109/IJCB52358.2021.9484372","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484372","url":null,"abstract":"For the first time, the feasibility of creating morphed samples for attacking vascular biometrics is investigated, in particular finger vein recognition schemes are addressed. A conducted vulnerability analysis reveals that (i) the extent of vulnerability, (ii) the type of most vulnerable recognition scheme, and (iii) the preferred way to determine the best morph sample for a given target sample depends on the employed sensor. Digital morphs represent a significant threat as vulnerability in terms of IAPMR is often found to be > 0.8 or > 0.6 (in sensor dependent manner). Physical artefacts created from these morphs lead to clearly lower vulnerability (with IAPMR ≤ 0.25), however, this has to be attributed to the low quality of the artefacts (and is expected be increase for better artefact quality).","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133146730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
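IAPMR, the vulnerability metric quoted above, is the fraction of attack presentations whose comparison score reaches the system's match decision. A minimal sketch, assuming similarity scores where higher means a better match:

```python
import numpy as np

def iapmr(attack_scores, threshold, higher_is_match=True):
    """Impostor Attack Presentation Match Rate: fraction of attack
    presentations (here, morphed finger-vein samples) accepted as a
    match at the system's operating threshold. Inputs are assumed."""
    scores = np.asarray(attack_scores)
    if higher_is_match:
        return float(np.mean(scores >= threshold))
    return float(np.mean(scores <= threshold))

# Toy usage: 1000 simulated morph-attack similarity scores.
rng = np.random.default_rng(1)
scores = rng.normal(0.72, 0.1, 1000)     # hypothetical score distribution
print(f"IAPMR = {iapmr(scores, threshold=0.6):.2f}")
```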
PointFace: Point Set Based Feature Learning for 3D Face Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484368
Changyuan Jiang, Shisong Lin, Wei Chen, Feng Liu, Linlin Shen
{"title":"PointFace: Point Set Based Feature Learning for 3D Face Recognition","authors":"Changyuan Jiang, Shisong Lin, Wei Chen, Feng Liu, Linlin Shen","doi":"10.1109/IJCB52358.2021.9484368","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484368","url":null,"abstract":"Though 2D face recognition (FR) has achieved great success due to powerful 2D CNNs and large-scale training data, it is still challenged by extreme poses and illumination conditions. On the other hand, 3D FR has the potential to deal with aforementioned challenges in the 2D domain. However, most of available 3D FR works transform 3D surfaces to 2D maps and utilize 2D CNNs to extract features. The works directly processing point clouds for 3D FR is very limited in literature. To bridge this gap, in this paper, we propose a light-weight framework, named PointFace, to directly process point set data for 3D FR. Inspired by contrastive learning, our PointFace use two weight-shared encoders to directly extract features from a pair of 3D faces. A feature similarity loss is designed to guide the encoders to obtain discriminative face representations. We also present a pair selection strategy to generate positive and negative pairs to boost training. Extensive experiments on Lock3DFace and Bosphorus show that the proposed PointFace outperforms state-of-the-art 2D CNN based methods.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126002038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
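The core of PointFace, as described, is a pair of weight-shared point-set encoders trained with a feature similarity loss. The sketch below uses a heavily simplified PointNet-style encoder and an assumed cosine-margin form of the loss; neither matches the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPointEncoder(nn.Module):
    """Hedged, heavily simplified PointNet-style encoder: a per-point
    MLP followed by max pooling over the point dimension."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim))

    def forward(self, points):                       # (B, N, 3)
        return self.mlp(points).max(dim=1).values    # (B, feat_dim)

def feature_similarity_loss(f1, f2, same_id, margin=0.4):
    """Pull genuine pairs together in cosine space, push impostor
    pairs below a margin (assumed formulation)."""
    cos = F.cosine_similarity(f1, f2)
    return torch.where(same_id, 1.0 - cos, F.relu(cos - margin)).mean()

# Toy usage: two batches of 1024-point faces through one shared encoder.
enc = TinyPointEncoder()
a, b = torch.randn(4, 1024, 3), torch.randn(4, 1024, 3)
same = torch.tensor([True, True, False, False])
print(feature_similarity_loss(enc(a), enc(b), same))
```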
Message from General Chairs of IJCB 2021
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/ijcb52358.2021.9521650
{"title":"Message from General Chairs of IJCB 2021","authors":"","doi":"10.1109/ijcb52358.2021.9521650","DOIUrl":"https://doi.org/10.1109/ijcb52358.2021.9521650","url":null,"abstract":"","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"18 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114024991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High Quality Facial Data Synthesis and Fusion for 3D Low-quality Face Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484339
Shisong Lin, Changyuan Jiang, Feng Liu, Linlin Shen
{"title":"High Quality Facial Data Synthesis and Fusion for 3D Low-quality Face Recognition","authors":"Shisong Lin, Changyuan Jiang, Feng Liu, Linlin Shen","doi":"10.1109/IJCB52358.2021.9484339","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484339","url":null,"abstract":"3D face recognition (FR) is a popular topic in computer vision, since 3D face data is invariant to pose and illumination condition changes which easily affect the performance of 2D FR. Though many 3D solutions have achieved impressive performances on public high-quality 3D face databases, few works concentrate on low-quality 3D FR. As the quality of 3D face acquired by widely used low-cost RGB-D sensors is really low, more robust methods are required to achieve satisfying performance on these 3D face data. To address this issue, we propose a novel two-stage pipeline to improve the performance of 3D FR. In the first stage, we utilize pix2pix network to restore the quality of low-quality face. In the second stage, we launch a multi-quality fusion network (MQFNet) to fuse the features from different qualities and enhance FR performance. Our proposed network achieves the state-of-the-art performance on the Lock3DFace database. Furthermore, extensive controlled experiments are conducted to demonstrate the effectiveness of each model of our network.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114156894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
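The second-stage fusion idea can be illustrated with a simple learned gate over features from the raw low-quality face and its pix2pix-restored counterpart. This is only a sketch of the fusion concept; the abstract does not describe MQFNet's actual architecture, and the gating scheme here is an assumption.

```python
import torch
import torch.nn as nn

class SimpleQualityFusion(nn.Module):
    """Hedged sketch: blend a low-quality feature with a restored
    high-quality feature via a learned per-dimension gate."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, feat_low, feat_restored):
        g = self.gate(torch.cat([feat_low, feat_restored], dim=1))
        return g * feat_low + (1 - g) * feat_restored   # gated blend

# Toy usage with 256-d features from two hypothetical encoders.
fuse = SimpleQualityFusion()
out = fuse(torch.randn(8, 256), torch.randn(8, 256))
print(out.shape)    # torch.Size([8, 256])
```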
Robust End-to-End Hand Identification via Holistic Multi-Unit Knuckle Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484356
Ritesh Vyas, Hossein Rahmani, Ricki Boswell-Challand, P. Angelov, Sue Black, Bryan M. Williams
{"title":"Robust End-to-End Hand Identification via Holistic Multi-Unit Knuckle Recognition","authors":"Ritesh Vyas, Hossein Rahmani, Ricki Boswell-Challand, P. Angelov, Sue Black, Bryan M. Williams","doi":"10.1109/IJCB52358.2021.9484356","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484356","url":null,"abstract":"In many cases of serious crime, images of a hand can be the only evidence available for the forensic identification of the offender. As well as placing them at the scene, such images and video evidence offer proof of the offender committing the crime. The knuckle creases of the human hand have emerged as an effective biometric trait and been used to identify the perpetrators of child abuse in forensic investigations. However, manual utilization of knuckle creases for identification is highly time consuming and can be subjective, requiring the expertise of experienced forensic anthropologists whose availability is very limited. Hence, there arises a need for an automated approach for localization and comparison of knuckle patterns. In this paper, we present a fully automatic end-to-end approach which localizes the minor, major and base knuckles in images of the hand, and effectively uses them for identification achieving state-of-the-art results. This work improves on existing approaches and allows us to strengthen cases further by objectively combining multiple knuckles and knuckle types to obtain a holistic matching result for comparing two hands. This yields a stronger and more robust multi-unit biometric and facilitates the large-scale examination of the potential of knuckle-based identification. Evaluated on two large landmark datasets, the proposed framework achieves equal error rates (EER) of 1.0-1.9%, rank-1 accuracies of 99.3-100% and decidability indices of 5.04-5.83. We make the full results available via a novel online GUI to raise awareness with the general public and forensic investigators about the identifiability of various knuckle regions. These strong results demonstrate the value of our holistic approach to hand identification from knuckle patterns and their utility in forensic investigations.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114692319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
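Two measurable pieces of this pipeline are easy to sketch: score-level fusion of the individual knuckle units into one hand-level score, and the EER used for evaluation. The weighted-mean fusion below is an assumption; the paper's actual combination rule is not given in the abstract.

```python
import numpy as np

def holistic_hand_score(knuckle_scores, weights=None):
    """Hedged sketch: fuse per-knuckle comparison scores (major, minor,
    base units) into one hand-level score via a weighted mean."""
    s = np.asarray(knuckle_scores, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * s) / np.sum(w))

def equal_error_rate(genuine, impostor):
    """EER: the point where false accept rate equals false reject rate,
    found by a simple scan over candidate thresholds."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return float((far[i] + frr[i]) / 2)

# Toy usage: simulated genuine/impostor hand scores from 10 knuckles each.
rng = np.random.default_rng(2)
gen = np.array([holistic_hand_score(rng.normal(0.8, 0.1, 10)) for _ in range(200)])
imp = np.array([holistic_hand_score(rng.normal(0.4, 0.1, 10)) for _ in range(200)])
print(f"EER = {equal_error_rate(gen, imp):.3f}")
```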
Sequential Interactive Biased Network for Context-Aware Emotion Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484370
Xinpeng Li, Xiaojiang Peng, Changxing Ding
{"title":"Sequential Interactive Biased Network for Context-Aware Emotion Recognition","authors":"Xinpeng Li, Xiaojiang Peng, Changxing Ding","doi":"10.1109/IJCB52358.2021.9484370","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484370","url":null,"abstract":"Emotion context information is crucial yet complicated for emotion recognition. How to process it is a challenging problem. Existing works mainly extract context representations of the face, body and scene independently. These strategies may be limited in the understanding of emotional context relation. To address this problem, we propose Sequential Interactive Biased Network (SIB-Net), which is motivated by the studies that the context contains sequential, interactive and biased relation. Specifically, SIB-Net captures and utilizes the context relation by three modules: i) a Sequential Context Module captures consecutive relation with a GRU-like architecture, ii) an Interactive Context Module acquires cooperative context with global correlated linear fusion, and iii) a Biased Context Module benefits from the biased relation with distribution labels and the L1 loss. Extensive experiments on EMOTIC and CAER datasets show that our SIB-Net improves baseline significantly and achieves comparable results to the state-of-the-art methods.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114602264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
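The Sequential Context Module's idea of consuming face, body and scene cues in order can be sketched with a plain GRU over the three context features. This illustrates only the sequential module; the interactive fusion and biased L1 term are omitted, and the dimensions are illustrative (26 classes follows EMOTIC's category set).

```python
import torch
import torch.nn as nn

class SequentialContextSketch(nn.Module):
    """Hedged sketch: run face, body and scene features through a GRU
    in order, so later context cues are conditioned on earlier ones."""
    def __init__(self, dim=256, n_classes=26):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, face, body, scene):                # each (B, dim)
        seq = torch.stack([face, body, scene], dim=1)    # (B, 3, dim)
        _, h = self.gru(seq)                             # h: (1, B, dim)
        return self.head(h.squeeze(0))                   # emotion logits

# Toy usage with hypothetical 256-d context features.
net = SequentialContextSketch()
logits = net(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
print(logits.shape)     # torch.Size([4, 26])
```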
Contrastive Uncertainty Learning for Iris Recognition with Insufficient Labeled Samples
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484388
Jianze Wei, R. He, Zhenan Sun
{"title":"Contrastive Uncertainty Learning for Iris Recognition with Insufficient Labeled Samples","authors":"Jianze Wei, R. He, Zhenan Sun","doi":"10.1109/IJCB52358.2021.9484388","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484388","url":null,"abstract":"Cross-database recognition is still an unavoidable challenge when deploying an iris recognition system to a new environment. In the paper, we present a compromise problem that resembles the real-world scenario, named iris recognition with insufficient labeled samples. This new problem aims to improve the recognition performance by utilizing partially-or un-labeled data. To address the problem, we propose Contrastive Uncertainty Learning (CUL) by integrating the merits of uncertainty learning and contrastive self-supervised learning. CUL makes two efforts to learn a discriminative and robust feature representation. On the one hand, CUL explores the uncertain acquisition factors and adopts a probabilistic embedding to represent the iris image. In the probabilistic representation, the identity information and acquisition factors are disentangled into the mean and variance, avoiding the impact of uncertain acquisition factors on the identity information. On the other hand, CUL utilizes probabilistic embeddings to generate virtual positive and negative pairs. Then CUL builds its contrastive loss to group the similar samples closely and push the dissimilar samples apart. The experimental results demonstrate the effectiveness of the proposed CUL for iris recognition with insufficient labeled samples.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121679513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
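Two ingredients from this abstract can be sketched directly: the probabilistic embedding (identity in the mean, acquisition uncertainty in the variance) sampled via reparameterization, and an InfoNCE-style contrastive loss over the resulting virtual pairs. The exact loss form and sampling scheme are assumptions; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def sample_virtual_embeddings(mu, logvar, n=4):
    """Reparameterization: draw n virtual embeddings per sample from
    N(mu, sigma^2), separating identity (mean) from uncertainty (variance)."""
    std = torch.exp(0.5 * logvar)                       # (B, D)
    eps = torch.randn(n, *mu.shape)                     # (n, B, D)
    return mu.unsqueeze(0) + eps * std.unsqueeze(0)     # (n, B, D)

def contrastive_uncertainty_loss(mu, logvar, labels, n=4, temp=0.1):
    """Assumed InfoNCE-style contrast over sampled embeddings: samples
    of the same identity are positives, all others negatives."""
    z = F.normalize(sample_virtual_embeddings(mu, logvar, n), dim=-1)
    z = z.reshape(-1, mu.shape[1])                      # (n*B, D)
    lab = labels.repeat(n)                              # matches reshape order
    sim = z @ z.t() / temp
    eye = torch.eye(len(lab), dtype=torch.bool)
    sim = sim.masked_fill(eye, float('-inf'))           # drop self-pairs
    pos = (lab.unsqueeze(0) == lab.unsqueeze(1)) & ~eye
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -log_prob[pos].mean()

# Toy usage: 8 iris features, 64-d, 4 identities.
mu, logvar = torch.randn(8, 64), torch.randn(8, 64) * 0.1
labels = torch.randint(0, 4, (8,))
print(contrastive_uncertainty_loss(mu, logvar, labels))
```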