2021 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Bita-Net: Bi-temporal Attention Network for Facial Video Forgery Detection
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484408
Yiwei Ru, Wanting Zhou, Yunfan Liu, Jianxin Sun, Qi Li
{"title":"Bita-Net: Bi-temporal Attention Network for Facial Video Forgery Detection","authors":"Yiwei Ru, Wanting Zhou, Yunfan Liu, Jianxin Sun, Qi Li","doi":"10.1109/IJCB52358.2021.9484408","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484408","url":null,"abstract":"Deep forgery detection on video data has attracted remarkable research attention in recent years due to its potential in defending forgery attacks. However, existing methods either only focus on the visual evidence within individual images, or are too sensitive to fluctuations across frames. To address these issues, this paper propose a novel model, named Bita-Net, to detect forgery faces in video data. The network design of Bita-Net is inspired by the mechanism of how human beings detect forgery data, i.e. browsing and scrutinizing, which is reflected by the two-pathway architecture of Bita-Net. Concretely, the browsing pathway scans the entire video at a high frame rate to check the temporal consistency, while the scrutinizing pathway focuses on analyzing key frames of the video at a lower frame rate. Furthermore, an attention branch is introduced to improve the forgery detection ability of the scrutinizing pathway. Extensive experiment results demonstrate the effectiveness and generalization ability of Bita-Net on various popular face forensics detection datasets, including FaceForensics++, CelebDF, DeepfakeTIMIT and UADFV.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114435122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
YakReID-103: A Benchmark for Yak Re-Identification
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484341
Tingting Zhang, Qijun Zhao, Cuo Da, Liyuan Zhou, Lei Li, Suonan Jiancuo
{"title":"YakReID-103: A Benchmark for Yak Re-Identification","authors":"Tingting Zhang, Qijun Zhao, Cuo Da, Liyuan Zhou, Lei Li, Suonan Jiancuo","doi":"10.1109/IJCB52358.2021.9484341","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484341","url":null,"abstract":"Precision livestock management requires animal traceability and disease trajectory, for which discriminating between or re-identifying individual animals is of significant importance. Existing re-identification (re-ID) methods are mostly proposed for persons and vehicles, compared with which animals are extraordinarily more challenging to be re-identified because of subtle visual differences between individuals. In this paper, we focus on image-based re-ID of yaks (Bos grunniens), which are indispensable livestock in local animal husbandry economy in Qinghai-Tibet Plateau. We establish the first yak re-ID dataset (called YakReID-103) which contains 2, 247 images of 103 different yaks with bounding box, direction-based pose, and identity annotations. Moreover, according to the characteristics of yaks, we modifiy several person re-ID and animal re-ID methods as baselines for yak re-ID. Experimental results of the baselines on YakReID-103 demonstrate the challenges in yak re-ID. We expect that the proposed benchmark will promote the research of animal biometrics and extend the application scope of re-ID techniques.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128295243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484389
Yaowen Xu, Zhuming Wang, Hu Han, Lifang Wu, Yongluo Liu
{"title":"Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection","authors":"Yaowen Xu, Zhuming Wang, Hu Han, Lifang Wu, Yongluo Liu","doi":"10.1109/IJCB52358.2021.9484389","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484389","url":null,"abstract":"Face anti-spoofing plays a vital role in face recognition systems. The existed deep learning approaches have effectively improved the performance of presentation attack detection (PAD). However, they learn a uniform feature for different types of presentation attacks, which ignore the diversity of the inherent cues presented in different spoofing types. As a result, they can not effectively represent the intrinsic difference between different spoof faces and live faces, and the performance drops on the cross-domain databases. In this paper, we introduce the inherent cues of different spoofing types by non-uniform learning as complements to uniform features. Two lightweight sub-networks are designed to learn inherent motion patterns from photo attacks and the inherent texture cues from video attacks. Furthermore, an element-wise weighting fusion strategy is proposed to integrate the non-uniform inherent cues and uniform features. Extensive experiments on four public databases demonstrate that our approach outperforms the state-of-the-art methods and achieves a superior performance of 3.7% ACER in the cross-domain Protocol 4 of the Oulu-NPU database. Code is available at https://github.com/BJUT-VIP/Non-uniform-cues.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133157293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Gender-Invariant Face Representation Learning and Data Augmentation for Kinship Verification
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484358
Yuqing Feng, Bo Ma
{"title":"Gender-Invariant Face Representation Learning and Data Augmentation for Kinship Verification","authors":"Yuqing Feng, Bo Ma","doi":"10.1109/IJCB52358.2021.9484358","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484358","url":null,"abstract":"Different from conventional face recognition, the gender discrepancy between parent and child is an inevitable issue for kinship verification. Father and daughter, or mother and son, may have different facial features due to gender differences, which renders kinship verification difficult. In view of this, this paper proposes a gender-invariant feature extraction and image-to-image translation network (Gender-FEIT) that learns a gender invariant face representation and produces the transgendered images simultaneously. In Gender-FEIT, the male (female) face is first projected to a feature representation through an encoder, then the representation is transformed into a female (male) face through the specific generator. A gender discriminator is imposed on the encoder, forcing to learn a gender invariant representation in an adversarial way. This representation preserves the high-level personal information of the input face but removes gender information, which is applicable to cross-gender kinship verification. Moreover, the competition between generators and image discriminators encourages to generate realistic-looking faces that can enlarge kinship datasets. This novel data augmentation method significantly improves the performance of kinship verification. Experimental results demonstrate the effectiveness of our method on two most widely used kinship databases.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"156 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128764783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Preserving Gender and Identity in Face Age Progression of Infants and Toddlers
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484330
Yao Xiao, Yijun Zhao
{"title":"Preserving Gender and Identity in Face Age Progression of Infants and Toddlers","authors":"Yao Xiao, Yijun Zhao","doi":"10.1109/IJCB52358.2021.9484330","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484330","url":null,"abstract":"Realistic age-progressed photos provide invaluable biometric information in a wide range of applications. In recent years, deep learning-based approaches have made remarkable progress in modeling the aging process of the human face. Nevertheless, it remains a challenging task to generate accurate age-progressed faces from infant or toddler photos. In particular, the lack of visually detectable gender characteristics and the drastic appearance changes in early life contribute to the difficulty of the task. We address this challenge by extending the CAAE (2017) architecture to 1) incorporate gender information and 2) augment the model’s overall architecture with an identity-preserving component based on facial features. We trained our model using the publicly available UTKFace dataset and evaluated our model by simulating up to 100 years of age progression on 1,156 male and 1,207 female infant and toddler face photos. Compared to the CAAE approach, our new model demonstrates noticeable visual improvements. Quantitatively, our model exhibits an overall gain of 77.0% (male) and 13.8% (female) in gender fidelity measured by a gender classifier for the simulated photos across the age spectrum. Our model also demonstrates a 22.4% gain in identity preservation measured by a facial recognition neural network.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117301086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Refining Single Low-Quality Facial Depth Map by Lightweight and Efficient Deep Model
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484381
Guodong Mu, Di Huang, Weixin Li, Guosheng Hu, Yunhong Wang
{"title":"Refining Single Low-Quality Facial Depth Map by Lightweight and Efficient Deep Model","authors":"Guodong Mu, Di Huang, Weixin Li, Guosheng Hu, Yunhong Wang","doi":"10.1109/IJCB52358.2021.9484381","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484381","url":null,"abstract":"Consumer depth sensors have become increasingly common, however, the data are rather coarse and noisy, which is problematic to delicate tasks, such as 3D face modeling and 3D face recognition. In this paper, we present a novel and lightweight 3D Face Refinement Model (3D-FRM), to effectively and efficiently improve the quality of such single facial depth maps. 3D-FRM has an encoder-decoder structure, where the encoder applies depth-wise, point-wise convolutions and the fusion of features of different receptive fields to capture original discriminative information, and the decoder exploits sub-pixel convolutions and the combination of low- and high-level features to achieve strong shape recovery. We also propose a joint loss function to smooth facial surfaces and preserve their identities. In addition, we contribute a large dataset with low- and high-quality 3D face pairs to facilitate this research. Extensive experiments are conducted on the Bosphorus and Lock3DFace datasets, and results show the competency of the proposed method at ameliorating both visual quality and recognition accuracy. Code and data will be available at https://github.com/muyouhang/3D-FRM.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130731960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Finger Vein Verification using Intrinsic and Extrinsic Features
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484401
Liying Lin, Haozhe Liu, Wentian Zhang, Feng Liu, Zhihui Lai
{"title":"Finger Vein Verification using Intrinsic and Extrinsic Features","authors":"Liying Lin, Haozhe Liu, Wentian Zhang, Feng Liu, Zhihui Lai","doi":"10.1109/IJCB52358.2021.9484401","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484401","url":null,"abstract":"Finger vein has attracted substantial attention due to its good security. However, the variability of the finger vein data will be caused by the illumination, environment temperature, acquisition equipment, and so on, which is a great challenge for finger vein recognition. To address this problem, we propose a novel method to design an endto-end deep Convolutional Neural Network (CNN) for robust finger vein recognition. The approach mainly includes an Intrinsic Feature Learning (IFL) module using an auto-encoder network and an Extrinsic Feature Learning (EFL) module based on a Siamese network. The IFL module is designed to estimate the expectation of intra-class finger vein images with various offsets and rotation, while the EFL module is constructed to learn the inter-class feature representation. Then, robust verification is finally achieved by considering the distances of both intrinsic and extrinsic features. We conduct experiments on two public datasets (i.e. SDUMLA-HMT and MMCBNU_6000) and an in-house dataset (MultiView-FV) with more deformation finger vein images, and the equal error rate (EER) is 0.47%, 0.1%, and 1.69% respectively. The comparison against baseline and existing algorithms shows the effectiveness of our proposed method.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"62 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120820872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Large-scale Database for Less Cooperative Iris Recognition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484357
Junxing Hu, Leyuan Wang, Zhengquan Luo, Yunlong Wang, Zhenan Sun
{"title":"A Large-scale Database for Less Cooperative Iris Recognition","authors":"Junxing Hu, Leyuan Wang, Zhengquan Luo, Yunlong Wang, Zhenan Sun","doi":"10.1109/IJCB52358.2021.9484357","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484357","url":null,"abstract":"Since the outbreak of the COVID-19 pandemic, iris recognition has been used increasingly as contactless and unaffected by face masks. Although less user cooperation is an urgent demand for existing systems, corresponding manually annotated databases could hardly be obtained. This paper presents a large-scale database of near-infrared iris images named CASIA-Iris-Degradation Version 1.0 (DV1), which consists of 15 subsets of various degraded images, simulating less cooperative situations such as illumination, off-angle, occlusion, and nonideal eye state. A lot of open-source segmentation and recognition methods are compared comprehensively on the DV1 using multiple evaluations, and the best among them are exploited to conduct ablation studies on each subset. Experimental results show that even the best deep learning frameworks are not robust enough on the database, and further improvements are recommended for challenging factors such as half-open eyes, off-angle, and pupil dilation. Therefore, we publish the DV1 with manual annotations online to promote iris recognition. (http://www.cripacsir.cn/dataset/)","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123088375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Visual-Semantic Transformer for Face Forgery Detection
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484407
Yuting Xu, Gengyun Jia, Huaibo Huang, Junxian Duan, R. He
{"title":"Visual-Semantic Transformer for Face Forgery Detection","authors":"Yuting Xu, Gengyun Jia, Huaibo Huang, Junxian Duan, R. He","doi":"10.1109/IJCB52358.2021.9484407","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484407","url":null,"abstract":"This paper proposes a novel Visual-Semantic Transformer (VST) to detect face forgery based on semantic aware feature relations. In face images, intrinsic feature relations exist between different semantic parsing regions. We find that face forgery algorithms always change such relations. Therefore, we start the approach by extracting Contextual Feature Sequence (CFS) using a transformer encoder to make the best abnormal feature relation patterns. Meanwhile, images are segmented as soft face regions by a face parsing module. Then we merge the CFS and the soft face regions as Visual Semantic Sequences (VSS) representing features of semantic regions. The VSS is fed into the transformer decoder, in which the relations in the semantic region level are modeled. Our method achieved 99.58% accuracy on FF++(Raw) and 96.16% accuracy on Celeb-DF. Extensive experiments demonstrate that our framework outperforms or is comparable with state-of-the-art detection methods, especially towards unseen forgery methods.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122157278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Optimizing contactless to contact-based fingerprint comparison using simple parametric warping models
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484364
Dominik Söllinger, A. Uhl
{"title":"Optimizing contactless to contact-based fingerprint comparison using simple parametric warping models","authors":"Dominik Söllinger, A. Uhl","doi":"10.1109/IJCB52358.2021.9484364","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484364","url":null,"abstract":"2D contactless to contact-based fingerprint (FP) comparison is a challenging task due to different types of distortion introduced during the capturing process. While contact-based FPs typically exhibit a wide range of elastic distortions, perspective distortions pose a problem in contactless FP imagery. In this work, we investigate three simple parametric warping models for contactless fingerprints — circular, elliptical and bidirectional warping — and show that these models can be used to improve the interoperability between the two modalities by simulating unfolding of a generic 3D model. Additionally, we employ score fusion as a technique to enhance the comparison performance in scenarios where multiple contactless FPs of the same finger are available. Using the simple circular warping, we have been able to decrease the Equal Error Rate (EER) from 1.79% to 0.78% and 1.82% to 1.31% on our dataset, respectively.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123272252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3