Latest publications: 2021 IEEE International Joint Conference on Biometrics (IJCB)

Video-Based Physiological Measurement Using 3D Central Difference Convolution Attention Network
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484405
Yu Zhao, Bochao Zou, Fan Yang, Lin Lu, Abdelkader Nasreddine Belkacem, Chao Chen
Abstract: Remote photoplethysmography (rPPG) is a non-contact method to measure physiological signals, such as heart rate (HR) and respiratory rate (RR), from facial videos. In this paper, we constructed a central difference convolutional attention network with Huber loss to perform more robust remote physiological signal measurements. The proposed method consists of two key parts: 1) using central difference convolution to enhance the spatiotemporal representation, which can capture rich physiology-related temporal context by gathering time-difference information; 2) using Huber loss as the loss function, so that the gradient decreases smoothly as the loss between the rPPG and ground-truth PPG signals approaches the minimum. Through experiments on multiple public datasets and cross-dataset evaluation, the good performance and robustness of the rPPG measurement network based on central difference convolution are verified.
Citations: 10
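The Huber loss used above is quadratic for small residuals and linear for large ones, which is what makes the gradient shrink smoothly near the minimum. A minimal NumPy sketch (the `delta` threshold of 1.0 is an assumption; the paper's setting is not given here):

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Huber loss: quadratic for residuals below `delta`, linear above,
    so the gradient shrinks smoothly as pred approaches target."""
    r = np.abs(pred - target)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return float(np.where(r <= delta, quadratic, linear).mean())

print(huber_loss(np.array([0.5]), np.array([0.0])))  # 0.125 (quadratic branch)
print(huber_loss(np.array([3.0]), np.array([0.0])))  # 2.5 (linear branch)
```

Unlike plain L2 loss, outlier frames in a noisy PPG ground-truth signal contribute a bounded gradient, which is the robustness property the abstract points to.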
LFMB-3DFB: A Large-scale Finger Multi-Biometric Database and Benchmark for 3D Finger Biometrics
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484369
Weili Yang, Zhuoming Chen, Junduan Huang, Linfeng Wang, Wenxiong Kang
Abstract: The finger contains several discriminative biometric traits, including the fingerprint, finger vein, finger knuckle, and finger shape, which carry complementary identity information. However, most current research and practical applications use only one or a few of these traits, which leads to unsatisfactory recognition performance and easy forgery. Our work is the first attempt to collect and study all biometric traits of the finger. First, a novel multi-view, multi-spectral 3D finger imaging system is designed; to the best of our knowledge, it is the first biometric imaging system that can capture almost all finger-based traits. With this 3D finger imaging system, we scanned numerous fingers, acquiring their external skin images and internal vein images from 6 different views. 3D finger models with skin and vein textures are then reconstructed by space carving, mesh regularization, and texture mapping algorithms. Second, we establish a benchmark dataset, namely the Large-scale Finger Multi-Biometric database and benchmark for 3D Finger Biometrics (LFMB-3DFB). LFMB-3DFB contains 695 fingers, each captured 10 times; 6 finger skin images and 6 finger vein images are obtained per acquisition, yielding 83,400 images and 6,950 3D finger models in total. In addition, we designed a rigorous and comprehensive evaluation protocol for both identification and verification tasks. Finally, we designed corresponding baselines for 2D finger trait recognition, multi-view finger trait recognition, 3D finger trait recognition, and score-level fusion. Rigorous experiments have been conducted to verify the significance and usefulness of the proposed LFMB-3DFB.
Citations: 8
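The score-level fusion baseline mentioned above can be illustrated with a generic weighted-sum rule over min-max-normalized per-trait match scores. The traits, weights, and normalization below are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def fuse_scores(trait_scores, weights):
    """Weighted-sum score-level fusion: min-max normalize each trait's
    match scores, then combine them with fixed per-trait weights."""
    traits = list(trait_scores)
    fused = np.zeros(len(trait_scores[traits[0]]))
    for trait in traits:
        s = np.asarray(trait_scores[trait], dtype=float)
        rng = s.max() - s.min()
        s_norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused += weights[trait] * s_norm
    return fused

# Hypothetical match scores for the same three comparisons, two traits.
scores = {
    "vein":    [0.2, 0.9, 0.4],
    "knuckle": [0.1, 0.8, 0.7],
}
weights = {"vein": 0.6, "knuckle": 0.4}
print(fuse_scores(scores, weights))
```

Normalizing before the weighted sum keeps one trait's score range from dominating the fused decision.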
Collaborative Feature Learning and Credible Soft Labeling for Unsupervised Domain Adaptive Person Re-Identification
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-08-04 DOI: 10.1109/IJCB52358.2021.9484375
Haijian Wang, Meng Yang
Abstract: Cross-domain person re-identification (ReID) remains a challenging task because of the difficulty of transferring knowledge from a labeled source domain to an unlabeled target domain. To address the weak interaction of cross-domain feature learning and the inaccurate pseudo-label estimation in the target domain, we propose a novel framework termed Collaborative Feature Learning and Credible Soft Labeling (CFSL) to achieve efficient domain adaptation for ReID. By designing a Collaborative Feature Extraction (CFE) module, a more powerful and discriminative image description is generated. Specifically, CFE jointly learns robust features by integrating both global and local clues on the two domains and mining both cross-domain-invariant features and domain-specific features. Moreover, we exploit a Dual Soft Labeling (DSL) strategy in the target branch to obtain more credible and reliable identity estimations. Experimental results demonstrate the effectiveness of our method and show significant performance improvements over state-of-the-art methods on two public benchmarks.
Citations: 0
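A generic way to produce soft identity labels for unlabeled target-domain samples, in the spirit of the soft-labeling strategy above, is a softmax over negative distances to cluster centroids. The sketch below is a hypothetical illustration, not the paper's DSL mechanism:

```python
import numpy as np

def soft_labels(features, centroids, temperature=0.1):
    """Soft identity labels for unlabeled samples: a softmax over the
    negative Euclidean distances to the cluster centroids."""
    # Pairwise distances: (n_samples, n_centroids).
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    logits = -d / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

feats = np.array([[0.0, 0.0], [1.0, 1.0]])            # toy embeddings
cents = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # toy cluster centers
labels = soft_labels(feats, cents)
print(labels.shape)  # (2, 3)
```

Soft assignments like these degrade gracefully when clustering is imperfect, which is the motivation for preferring them over hard pseudo-labels.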
MixFaceNets: Extremely Efficient Face Recognition Networks
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-07-28 DOI: 10.1109/IJCB52358.2021.9484374
F. Boutros, N. Damer, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper
Abstract: In this paper, we present a set of extremely efficient and high-throughput models for accurate face verification, MixFaceNets, which are inspired by mixed depthwise convolutional kernels. Extensive experimental evaluations on the Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and IARPA Janus Benchmark (IJB-B and IJB-C) datasets have shown the effectiveness of our MixFaceNets for applications requiring extremely low computational complexity. Under the same level of computational complexity (≤ 500M FLOPs), our MixFaceNets outperform MobileFaceNets on all the evaluated datasets, achieving 99.60% accuracy on LFW, 97.05% accuracy on AgeDB-30, 93.60 TAR (at FAR = 1e-6) on MegaFace, 90.94 TAR (at FAR = 1e-4) on IJB-B, and 93.08 TAR (at FAR = 1e-4) on IJB-C. With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models while using significantly fewer FLOPs and less computational overhead, which proves the practical value of our proposed MixFaceNets. All training code, pre-trained models, and training logs are available at https://github.com/fdbtrs/mixfacenets.
Citations: 35
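The TAR-at-FAR figures reported above are computed by choosing the decision threshold from the impostor score distribution and then measuring genuine accepts. A sketch on synthetic scores (the assumption here is that higher scores mean better matches):

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-4):
    """True Accept Rate at a fixed False Accept Rate: set the threshold so
    that only a fraction `far` of impostor scores pass, then measure how
    many genuine scores clear it."""
    thr = np.quantile(np.asarray(impostor), 1.0 - far)
    return float(np.mean(np.asarray(genuine) >= thr))

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)   # synthetic genuine match scores
impostor = rng.normal(0.2, 0.1, 10_000)  # synthetic impostor match scores
print(tar_at_far(genuine, impostor, far=1e-2))
```

Reporting TAR at several operating points (1e-4, 1e-6) describes the verification trade-off more fully than a single accuracy number.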
Structure Destruction and Content Combination for Face Anti-Spoofing
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-07-22 DOI: 10.1109/IJCB52358.2021.9484395
Ke-Yue Zhang, Taiping Yao, Jian Zhang, Shice Liu, Bangjie Yin, Shouhong Ding, Jilin Li
Abstract: In pursuit of consolidating face verification systems, prior face anti-spoofing studies excavate hidden cues in the original images to discriminate real persons from diverse attack types with the assistance of auxiliary supervision. However, these methods are limited by two inherent disturbances in their training process: 1) the complete facial structure in a single image, and 2) implicit subdomains in the whole dataset. As a result, they are prone to memorizing the entire training dataset and are sensitive to non-homologous domain distributions. In this paper, we propose a Structure Destruction Module and a Content Combination Module to address these two limitations separately. The former destroys images into patches to construct a non-structural input, while the latter recombines patches from different subdomains or classes into a mixup construct. Based on this splitting-and-splicing operation, a Local Relation Modeling Module is further proposed to model the second-order relationship between patches. We evaluate our method on extensive public datasets and present promising experimental results that demonstrate the reliability of our method against state-of-the-art competitors.
Citations: 19
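The structure-destruction idea above (keep local texture, discard global facial layout) can be sketched as splitting an image into tiles and shuffling them. The patch size and uniform shuffling policy are assumptions for illustration:

```python
import numpy as np

def destroy_structure(img, patch=4, seed=0):
    """Split an image into patch x patch tiles and shuffle them:
    local texture is preserved, global facial structure is destroyed."""
    h, w = img.shape[:2]
    assert h % patch == 0 and w % patch == 0
    tiles = [img[i:i + patch, j:j + patch]
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    rng = np.random.default_rng(seed)
    rng.shuffle(tiles)
    cols, rows = w // patch, h // patch
    # Reassemble the shuffled tiles into a full image.
    return np.block([[tiles[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

img = np.arange(64, dtype=float).reshape(8, 8)
out = destroy_structure(img, patch=4)
print(sorted(out.ravel()) == sorted(img.ravel()))  # True: same pixels, new layout
```

Because the tile multiset is unchanged, a model trained on such inputs must rely on patch-level spoof cues rather than memorized facial geometry.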
Conditional Identity Disentanglement for Differential Face Morph Detection
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-07-05 DOI: 10.1109/IJCB52358.2021.9484355
Sudipta Banerjee, A. Ross
Abstract: We present the task of differential face morph attack detection using a conditional generative adversarial network (cGAN). To determine whether a face image in an identification document, such as a passport, is morphed, we propose an algorithm that learns to implicitly disentangle identities from the morphed image, conditioned on the trusted reference image, using the cGAN. Furthermore, the proposed method can also recover some underlying information about the second subject used in generating the morph. We performed experiments on the AMSL face morph, MorGAN, and EMorGAN datasets to demonstrate the effectiveness of the proposed method. We also conducted cross-dataset and cross-attack detection experiments. We obtained promising results of 3% BPCER @ 10% APCER on intra-dataset evaluation, which is comparable to existing methods, and 4.6% BPCER @ 10% APCER on cross-dataset evaluation, which outperforms state-of-the-art methods by at least 13.9%.
Citations: 12
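The BPCER @ 10% APCER numbers above fix the threshold so that at most 10% of morph attacks slip through, then count how many bona fide samples get rejected. A sketch on synthetic scores (the score polarity, higher = more bona fide, is an assumption):

```python
import numpy as np

def bpcer_at_apcer(attack, bonafide, apcer=0.10):
    """BPCER @ APCER: set the threshold so a fraction `apcer` of attack
    samples is wrongly accepted as bona fide, then measure the fraction
    of bona fide samples rejected at that threshold."""
    thr = np.quantile(np.asarray(attack), 1.0 - apcer)
    return float(np.mean(np.asarray(bonafide) < thr))

rng = np.random.default_rng(1)
bona = rng.normal(0.9, 0.05, 5_000)  # synthetic bona fide scores
attk = rng.normal(0.3, 0.10, 5_000)  # synthetic morph-attack scores
print(bpcer_at_apcer(attk, bona, apcer=0.10))
```

Fixing APCER and reporting BPCER makes detectors comparable at the same attacker-side risk level, which is why the paper quotes results in this form.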
Attention Aware Wavelet-based Detection of Morphed Face Images
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-06-29 DOI: 10.1109/IJCB52358.2021.9484398
Poorya Aghdaie, Baaria Chaudhary, Sobhan Soleymani, J. Dawson, N. Nasrabadi
Abstract: Morphed images have exploited loopholes in face recognition checkpoints, e.g., the Credential Authentication Technology (CAT) used by the Transportation Security Administration (TSA), which is a non-trivial security concern. To overcome the risks incurred by morphed presentations, we propose a wavelet-based morph detection methodology that adopts an end-to-end trainable soft attention mechanism. Our attention-based deep neural network (DNN) focuses on the salient regions of interest (ROI) that have the most spatial support for the morph detector's decision function, i.e., the morph-class binary softmax output. A retrospective of the morph synthesis procedure leads us to take the regions around facial landmarks as the ROI, particularly for landmark-based morphing techniques. Moreover, our attention-based DNN is adapted to the wavelet space, where the inputs of the network are coarse-to-fine spectral representations, 48 stacked wavelet sub-bands to be exact. We evaluate the performance of the proposed framework using three datasets: VISAPP17, LMA, and MorGAN. In addition, as attention maps can be a robust indicator of whether a probe image under investigation is genuine or counterfeit, we analyze the estimated attention maps for both a bona fide image and its corresponding morphed image. Finally, we present an ablation study on the efficacy of utilizing the attention mechanism for morph detection.
Citations: 22
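The stacked wavelet sub-bands used as network input above are built from decompositions like the single-level 2-D Haar transform below. The averaging normalization is chosen for simplicity, and the paper's exact wavelet family and level count are not restated here:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: returns the approximation
    (LL) and the horizontal/vertical/diagonal detail (LH, HL, HH) sub-bands,
    each at half the input resolution."""
    a = img[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0  # low-pass approximation
    lh = (a + b - c - d) / 4.0  # horizontal details
    hl = (a - b + c - d) / 4.0  # vertical details
    hh = (a - b - c + d) / 4.0  # diagonal details
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2)
```

Applying such a transform recursively to the LL band yields the coarse-to-fine stack of sub-bands that the detector consumes; morphing artifacts tend to concentrate in the high-frequency detail bands.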
Face Sketch Synthesis via Semantic-Driven Generative Adversarial Network
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-06-29 DOI: 10.1109/IJCB52358.2021.9484393
Xingqun Qi, Muyi Sun, Weining Wang, Xiaoxiao Dong, Qi Li, Caifeng Shan
Abstract: Face sketch synthesis has made significant progress with the development of deep neural networks in recent years. The delicate depiction of sketch portraits facilitates a wide range of applications such as digital entertainment and law enforcement. However, accurate and realistic face sketch generation is still a challenging task due to illumination variations and complex backgrounds in real scenes. To tackle these challenges, we propose a novel Semantic-Driven Generative Adversarial Network (SDGAN) which embeds global structure-level style injection and local class-level knowledge re-weighting. Specifically, we conduct facial saliency detection on the input face photos to provide the overall facial texture structure, which serves as a global prior. In addition, we exploit face parsing layouts as a semantic-level spatial prior to enforce globally structural style injection in the generator of SDGAN. Furthermore, to enhance the realism of the details, we propose a novel Adaptive Re-weighting Loss (ARLoss) dedicated to balancing the contributions of different semantic classes. Extensive experiments on the CUFS and CUFSF datasets show that our proposed algorithm achieves state-of-the-art performance.
Citations: 8
MFR 2021: Masked Face Recognition Competition
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-06-29 DOI: 10.1109/IJCB52358.2021.9484337
F. Boutros, N. Damer, J. Kolf, K. Raja, Florian Kirchbuchner, Raghavendra Ramachandra, Arjan Kuijper, Pengcheng Fang, Chao Zhang, Fei Wang, David Montero, N. Aginako, B. Sierra, M. Nieto, M. Erakin, U. Demir, Hazim Kemal Ekenel, Asaki Kataoka, K. Ichikawa, Shizuma Kubo, J Zhang, Mingjie He, Dan Han, S. Shan, Klemen Grm, Vitomir Štruc, Sachith Seneviratne, Nuran Kasthuriarachchi, Sanka Rasnayaka, Pedro C. Neto, Ana F. Sequeira, João Ribeiro Pinto, M. Saffari, Jaime S. Cardoso
Abstract: This paper presents a summary of the Masked Face Recognition Competition (MFR) held within the 2021 International Joint Conference on Biometrics (IJCB 2021). The competition attracted a total of 10 participating teams with valid submissions. The affiliations of these teams are diverse, spanning academia and industry in nine different countries. These teams successfully submitted 18 valid solutions. The competition was designed to motivate solutions that enhance the face recognition accuracy of masked faces. Moreover, the competition considered the deployability of the proposed solutions by taking the compactness of the face recognition models into account. A private dataset representing a collaborative, multi-session, real-mask capture scenario is used to evaluate the submitted solutions. In comparison to one of the top-performing academic face recognition solutions, 10 out of the 18 submitted solutions scored higher masked-face verification accuracy.
Citations: 35
An End-to-End Autofocus Camera for Iris on the Move
2021 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2021-06-29 DOI: 10.1109/IJCB52358.2021.9484340
Leyuan Wang, Kunbo Zhang, Yunlong Wang, Zhenan Sun
Abstract: For distant iris recognition, a long-focal-length lens is generally used to ensure the resolution of iris images, which reduces the depth of field and leads to potential defocus blur. To accommodate users standing still at different distances, it is necessary to control focus quickly and accurately. For users in motion, it is likewise expected that a sufficient number of accurately focused iris images be acquired. In this paper, we introduce a novel rapid autofocus camera for actively refocusing on the iris area of moving subjects with a focus-tunable lens. Our end-to-end computational algorithm can predict the best focus position from a single blurred image and automatically generate the proper lens-diopter control signal. This scene-based active manipulation method enables real-time focus tracking of the iris area of a moving subject. We built a test bench to collect real-world focal stacks for evaluating autofocus methods. Our camera reaches an autofocus speed of over 50 fps. The results demonstrate the advantages of the proposed camera for biometric perception in static and dynamic scenes. The code is available at https://github.com/Debatrix/AquulaCam.
Citations: 2