2020 IEEE International Joint Conference on Biometrics (IJCB): Latest Publications

Cross-Spectral Iris Matching Using Conditional Coupled GAN
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304929
Moktari Mostofa, Fariborz Taherkhani, J. Dawson, N. Nasrabadi
Cross-spectral iris recognition is emerging as a promising biometric approach to authenticating the identity of individuals. However, matching iris images acquired in different spectral bands shows significant performance degradation compared to single-band near-infrared (NIR) matching, due to the spectral gap between iris images obtained in the NIR and visible-light (VIS) spectra. Although researchers have recently focused on deep-learning-based approaches to recover invariant representative features for more accurate recognition, existing methods cannot achieve the accuracy required for commercial applications. Hence, in this paper, we propose a conditional coupled generative adversarial network (CpGAN) architecture for cross-spectral iris recognition that projects the VIS and NIR iris images into a low-dimensional embedding domain to explore the hidden relationship between them. The conditional CpGAN framework consists of a pair of GAN-based networks, one responsible for retrieving images in the visible domain and the other for retrieving images in the NIR domain. Both networks map the data into a common embedding subspace to ensure maximum pair-wise similarity between the feature vectors from the two iris modalities of the same subject. To demonstrate the usefulness of the proposed approach, extensive experimental results obtained on the PolyU dataset are compared to existing state-of-the-art cross-spectral recognition methods.
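The shared-embedding idea in the abstract above can be sketched in a dependency-free way. The linear "encoders", the 3-d features, and the margin below are illustrative stand-ins for the paper's GAN-based networks and its actual training objective, not the authors' implementation:

```python
import math

def embed(x, W):
    # Linear projection into the shared embedding space (stand-in for a GAN encoder).
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in W]

def l2(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def contrastive_loss(d, same, margin=1.0):
    # Genuine NIR/VIS pairs (same subject) are pulled together;
    # impostor pairs are pushed apart up to the margin.
    return d ** 2 if same else max(0.0, margin - d) ** 2

# Toy NIR/VIS feature vectors (hypothetical 3-d features for illustration).
nir, vis_same, vis_diff = [1.0, 0.2, 0.1], [0.9, 0.3, 0.1], [0.1, 0.9, 0.8]
W_nir = W_vis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # shared 2-d embedding

d_genuine = l2(embed(nir, W_nir), embed(vis_same, W_vis))
d_impostor = l2(embed(nir, W_nir), embed(vis_diff, W_vis))
print(d_genuine < d_impostor)  # a well-trained embedding should satisfy this
```

In the paper, the two encoders are trained jointly so that this genuine-versus-impostor gap holds across the spectral bands; here the projections are fixed only to make the objective concrete.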
Citations: 4
Inverse Biometrics: Reconstructing Grayscale Finger Vein Images from Binary Features
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304866
Christof Kauba, Simon Kirchgasser, Vahid Mirjalili, A. Uhl, A. Ross
In this work, we investigate the possibility of generating a grayscale image of the finger vein from its binary template. This exercise allows us to determine the invertibility of finger vein templates, which has implications for biometric security and privacy. While such an analysis has been undertaken in the context of face, fingerprint, and iris templates, this is the first work involving the finger vein biometric trait. The transformation from binary features to a grayscale image is accomplished using a Pix2Pix convolutional neural network (CNN). The reversibility of 6 different types of binary features is evaluated using this CNN, and experiments are conducted on 7 distinct finger vein datasets. Results indicate that (a) it is possible to reconstruct finger vein images from their binary templates; (b) the reconstructed images can be used for biometric recognition purposes; (c) the CNN trained on one dataset can be successfully used for reconstructing images in a different dataset (cross-dataset reconstruction); and (d) the images reconstructed from one set of features can be successfully used to extract a different set of features for biometric recognition (cross-feature-set generalization).
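The inversion threat model above can be made concrete with a toy evaluation loop: extract a binary template from an image, reconstruct a grayscale image (here simply assumed, since the Pix2Pix model is not reproduced), and check whether the reconstruction yields the same template. The thresholding feature extractor and the agreement metric are illustrative, not the six feature types used in the paper:

```python
def binarize(img, thresh=0.5):
    # Toy binary vein template: 1 where normalized intensity suggests a vein.
    return [[1 if px > thresh else 0 for px in row] for row in img]

def pixel_agreement(a, b):
    # Fraction of pixels on which two binary templates agree -- a crude proxy
    # for how well a reconstruction preserves the template.
    total = sum(len(r) for r in a)
    same = sum(1 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb) if pa == pb)
    return same / total

# Hypothetical 3x4 grayscale finger vein patch (values in [0, 1]).
original = [[0.9, 0.2, 0.1, 0.8],
            [0.7, 0.6, 0.2, 0.1],
            [0.1, 0.8, 0.9, 0.3]]
template = binarize(original)

# A successful inversion produces a grayscale image whose template matches.
reconstructed = [[0.8, 0.3, 0.2, 0.7],
                 [0.6, 0.7, 0.3, 0.2],
                 [0.2, 0.7, 0.8, 0.4]]
score = pixel_agreement(template, binarize(reconstructed))
print(score)  # -> 1.0
```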
Citations: 4
International Joint Conference on Biometrics
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/ijcb48548.2020.9304853
Citations: 0
Clustered Dynamic Graph CNN for Biometric 3D Hand Shape Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304894
Jan Svoboda, Pietro Astolfi, D. Boscaini, Jonathan Masci, M. Bronstein
Research in biometric recognition using hand shape has somewhat stagnated in the last decade. Meanwhile, computer vision and machine learning have experienced a paradigm shift with the renaissance of deep learning, which has set a new state of the art in many related fields. Inspired by successful applications of deep learning to other biometric modalities, we propose a novel approach to 3D hand shape recognition from RGB-D data based on geometric deep learning techniques. We show how to train our model on synthetic data and retain its performance on real samples at test time. To evaluate our method, we provide a new dataset of short video sequences, NNHand RGB-D, and show encouraging performance compared to diverse baselines on the new data, as well as on the current benchmark dataset HKPolyU. Moreover, the new dataset opens the door to many new research directions in hand shape recognition.
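The dynamic graph CNN family the title refers to builds a kNN graph over the point cloud and aggregates per-edge features (EdgeConv). A minimal sketch follows; the identity map stands in for the learned edge MLP, and the 4-point "hand" cloud is purely illustrative:

```python
import math

def knn(points, k):
    # For each point, indices of its k nearest neighbours (Euclidean distance).
    idx = []
    for i, p in enumerate(points):
        d = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        idx.append([j for _, j in d[:k]])
    return idx

def edge_conv(points, k):
    # One EdgeConv-style step: for each point, max-aggregate the edge features
    # (neighbour - point) over its kNN graph. A learned MLP would normally be
    # applied to each edge feature; the identity keeps the sketch minimal.
    out = []
    for i, nbrs in enumerate(knn(points, k)):
        edges = [[points[j][c] - points[i][c] for c in range(len(points[i]))]
                 for j in nbrs]
        out.append([max(e[c] for e in edges) for c in range(len(points[i]))])
    return out

# A tiny hypothetical 3D hand-shape point cloud (4 points).
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
features = edge_conv(cloud, k=2)
print(features[0])  # -> [1.0, 1.0, 0.0]
```

Because the graph is rebuilt from the current features at every layer, the receptive field adapts to the shape; the paper's "clustered" variant additionally groups points, which is not reproduced here.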
Citations: 2
Your Tattletale Gait: Privacy Invasiveness of IMU Gait Data
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304922
Sanka Rasnayaka, T. Sim
Modern personal devices measure and store vast amounts of sensory data such as Inertial Measurement Unit (IMU) data. These on-body sensor data can be used as a biometric by observing human movement (gait), yet people are less cautious about the privacy vulnerabilities of such sensory data. We highlight which personal characteristics can be derived from on-body sensor data and the effect of sensor location on these privacy invasions. By analyzing sensor locations with respect to privacy and utility, we discover sensor locations that preserve utility, such as biometric authentication, while reducing privacy vulnerability. We have collected (1) a multi-stream on-body IMU dataset using 3 IMU sensors, covering 6 sensor locations and 6 actions, along with various physical, personality, and socio-economic characteristics from 53 participants, and (2) an opinion survey of the relative importance of each attribute from 566 participants. Using these datasets, we show that gait data reveals a great deal of personal information, which may be a privacy concern. The opinion survey yields a ranking of the physical characteristics based on their perceived importance. Using a privacy vulnerability index, we show that sensors located in the front pocket or on the wrist are more privacy invasive than those in the back pocket or a bag, which are less invasive without a significant loss of utility as a biometric.
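One plausible shape for a privacy vulnerability index of the kind mentioned above is an importance-weighted average of how accurately each personal attribute can be inferred from a given sensor location. The attribute weights and accuracies below are invented for illustration and are not the paper's definition or numbers:

```python
def privacy_vulnerability_index(inference_acc, weights):
    # Hypothetical index: attribute-inference accuracy at a sensor location,
    # weighted by the surveyed importance of each attribute.
    total = sum(weights.values())
    return sum(inference_acc[a] * w for a, w in weights.items()) / total

# Assumed survey-derived importance weights and per-location inference
# accuracies (illustrative numbers only).
weights = {"gender": 3.0, "age": 2.0, "height": 1.0}
front_pocket = {"gender": 0.95, "age": 0.80, "height": 0.75}
back_pocket = {"gender": 0.70, "age": 0.60, "height": 0.55}

pvi_front = privacy_vulnerability_index(front_pocket, weights)
pvi_back = privacy_vulnerability_index(back_pocket, weights)
print(pvi_front > pvi_back)  # front pocket scores as the more invasive location
```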
Citations: 2
A Progressive Stack Face-based Network for Detecting Diabetes Mellitus and Breast Cancer
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304887
Jianhang Zhou, Qi Zhang, Bob Zhang
Diabetes mellitus and breast cancer have become more widespread than ever before. Those suffering from these two diseases usually need a blood test or biopsy, both of which extract fluids or tissues from the human body, causing pain and discomfort. With the rise of medical biometrics, it is possible to perform non-invasive detection based on biometric identifiers from a patient's face. However, it is still difficult to accurately perform disease detection on both diabetes mellitus and breast cancer simultaneously. To resolve this issue, we propose a progressive stack face-based network (PF-Net) that performs multi-class classification over diabetes mellitus, breast cancer, and healthy controls using facial information. To perform diagnosis progressively, a latent facial representation is first generated by a stacked sparse autoencoder. The representation is then fed into an ensemble layer containing several classifiers. Finally, only the effective classifiers are activated in the classification layer to make the final decision. Our experiments show that the proposed method achieves an overall accuracy of 92.94%, outperforming a number of classification methods.
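The "activate only the effective classifiers" step can be sketched as an ensemble filtered by a validation-accuracy threshold before voting. The activation rule, the threshold, the toy latent representation, and the lambda classifiers are all assumptions for illustration, not PF-Net's actual components:

```python
def majority_vote(predictions):
    # Final decision from the activated classifiers only.
    return max(set(predictions), key=predictions.count)

def progressive_classify(rep, classifiers, val_acc, threshold=0.8):
    # Only classifiers deemed effective (validation accuracy above an assumed
    # threshold) contribute to the final vote.
    active = [clf for clf, acc in zip(classifiers, val_acc) if acc >= threshold]
    return majority_vote([clf(rep) for clf in active])

# Toy latent representation (stand-in for the sparse-autoencoder output) and
# three hypothetical classifiers over {healthy, diabetes, breast_cancer}.
rep = [0.2, 0.9, 0.1]
classifiers = [
    lambda r: "diabetes" if r[1] > 0.5 else "healthy",
    lambda r: "diabetes" if sum(r) > 1.0 else "healthy",
    lambda r: "breast_cancer",  # weak classifier, filtered out below
]
val_acc = [0.93, 0.88, 0.41]

print(progressive_classify(rep, classifiers, val_acc))  # -> diabetes
```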
Citations: 2
All-in-Focus Iris Camera With a Great Capture Volume
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304932
Kunbo Zhang, Zhenteng Shen, Yunlong Wang, Zhenan Sun
The imaging volume of an iris recognition system restricts throughput and user convenience in biometric applications. Numerous attempted improvements remain impractical replacements for the dominant fixed-focus lens in stand-off iris recognition, owing to incremental performance gains and complicated optical designs. In this study, we develop a novel all-in-focus iris imaging system that uses a focus-tunable lens and a 2D steering mirror to greatly extend the capture volume through spatiotemporal multiplexing. Our depth-of-field extension system requires no mechanical motion and can adjust the focal plane at extremely high speed. In addition, the motorized reflection mirror adaptively steers the light beam to actively extend the horizontal and vertical fields of view. The proposed all-in-focus iris camera increases the depth of field up to 3.9 m, a factor of 37.5 over a conventional long-focal-length lens. We also experimentally demonstrate the capability of this 3D light-beam-steering imaging system for real-time multi-person iris refocusing using dynamic focal stacks, and its potential for continuous iris recognition of moving participants.
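A quick arithmetic check of the two figures quoted in the abstract: a 3.9 m depth of field at a 37.5x extension factor implies a conventional depth of field of roughly 10 cm, which is consistent with the tight capture volumes of fixed long-focal-length iris lenses:

```python
# Figures quoted in the abstract; the implied conventional DOF follows.
extended_dof_m = 3.9
factor = 37.5
conventional_dof_m = extended_dof_m / factor
print(round(conventional_dof_m, 3))  # -> 0.104
```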
Citations: 2
Finding the Suitable Doppelgänger for a Face Morphing Attack
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304878
Alexander Röttcher, U. Scherhag, C. Busch
ID cards are uniquely linked to one individual via a printed or electronically provided facial image. Even though the face is treated as a universal and distinctive characteristic, twins can weaken this distinctiveness because of their biological similarity. Humans might also falsely recognize an unknown person as a friend, colloquially called a Doppelgänger. Recently it was demonstrated that this effect of similar data subjects can be purposefully established between two individuals in order to increase the vulnerability of face recognition systems to the so-called morphing attack. This image manipulation technique creates a blended facial image that is similar to two or more data subjects. If embedded into an ID card, the manipulated reference image can be used by all participating individuals, breaking the concept of a unique link. This work elaborates the rather neglected step of selecting morph pairs based on a similarity score instead of a simple random assignment, and discusses the applicability of different candidate algorithms. The final approach considers complex real-world constraints while running in a reasonable amount of time and producing acceptably large morph sets. We show that this algorithm greatly increases the vulnerability of automated face recognition systems. Surprisingly, it also shows that an effective pre-selection of pairs questions the need for in-depth optimized morphing algorithms.
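The core idea of similarity-based pair selection can be sketched with a greedy matching over a face-embedding similarity matrix: repeatedly pick the most similar unused pair. This is a simple stand-in for the paper's constrained algorithm, and the similarity scores are invented:

```python
def greedy_morph_pairs(sim):
    # Greedy selection: repeatedly pick the most similar unused pair of
    # subjects, so each subject participates in at most one morph.
    pairs, used = [], set()
    candidates = sorted(
        ((sim[i][j], i, j) for i in range(len(sim)) for j in range(i + 1, len(sim))),
        reverse=True)
    for s, i, j in candidates:
        if i not in used and j not in used:
            pairs.append((i, j))
            used |= {i, j}
    return pairs

# Hypothetical face-embedding similarity scores between 4 subjects.
sim = [[1.0, 0.9, 0.2, 0.3],
       [0.9, 1.0, 0.1, 0.4],
       [0.2, 0.1, 1.0, 0.8],
       [0.3, 0.4, 0.8, 1.0]]

print(greedy_morph_pairs(sim))  # -> [(0, 1), (2, 3)]
```

Greedy matching is not optimal in general (a maximum-weight matching would be), but it keeps the runtime low, which matches the paper's emphasis on producing large morph sets in reasonable time.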
Citations: 12
Feature map masking based single-stage face detection
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304927
Xi Zhang, Junliang Chen, Weicheng Xie, Linlin Shen
Although great progress has been made in face detection, the trade-off between speed and accuracy remains a great challenge. In this paper we propose a feature-map-masking-based approach for single-stage face detection. As feature maps extracted from a feature pyramid network might contain face-unrelated features, we propose a mask generation branch that predicts the units significant for face detection. The masked feature maps, in which only important features remain, are then passed through the subsequent detection process. Ground-truth masks, generated directly from the face bounding boxes in the training images, are used to train the feature mask generation module. A mask-constrained dropout module is also proposed to drop out significant units of the shared feature maps, further improving detection performance. The proposed approach is extensively tested on the WIDER FACE dataset. The results suggest that our detector with a ResNet-152 backbone achieves the best precision-recall performance among competing methods, with accuracies as high as 95.4%, 94.0%, and 86.9% on the easy, medium, and hard subsets, respectively.
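The ground-truth masking step described above reduces to two operations: rasterize the face bounding boxes onto the feature-map grid, then multiply the feature map by the resulting binary mask. A minimal sketch (toy sizes; the learned mask branch is not reproduced):

```python
def box_mask(h, w, boxes):
    # Ground-truth mask on the feature-map grid: 1 inside any face box,
    # given as (row0, col0, row1, col1) with exclusive upper bounds.
    m = [[0] * w for _ in range(h)]
    for (r0, c0, r1, c1) in boxes:
        for r in range(r0, r1):
            for c in range(c0, c1):
                m[r][c] = 1
    return m

def apply_mask(fmap, mask):
    # Element-wise masking: only units inside face regions survive.
    return [[f * m for f, m in zip(fr, mr)] for fr, mr in zip(fmap, mask)]

# Toy 4x4 feature map with one face box covering the top-left 2x2 region.
fmap = [[5.0] * 4 for _ in range(4)]
mask = box_mask(4, 4, [(0, 0, 2, 2)])
masked = apply_mask(fmap, mask)
print(masked[0][0], masked[3][3])  # -> 5.0 0.0
```

At training time the paper supervises a mask generation branch against such ground-truth masks; at test time the predicted mask plays the role of `mask` here.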
Citations: 0
FEBA - An Anatomy Based Finger Vein Classification
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date: 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304889
Arya Krishnan, G. Nayar, Tony Thomas, N. Nystrom
Finger vein identification has become a promising biometric modality due to its anti-spoofing capability, time-invariant nature, privacy, and security compared to other predominant biometric traits. In the wake of recent epidemics and pandemics, the world has recognized the need for hygienic and contactless identification techniques such as finger vein recognition. Although finger vein biometrics has been around for some time, no classification scheme exists for finger vein images comparable to the Henry classes for fingerprints. For large-scale biometric identification systems, an accurate and consistent classification mechanism can significantly reduce the search space and matching time. In this paper, we first show that finger vein patterns can be classified into four classes, namely Fork, Eye, Bridge, and Arch (FEBA), and then propose an identification scheme based on this classification. To the best of our knowledge, this is the first attempt to classify finger vein images based on intrinsic anatomical features. We obtained a classification accuracy of 95.88% using a convolutional neural network and an average reduction of 86.89% in matching time on a heterogeneous database consisting of 4 different datasets. Cross-dataset validation and comparison with existing algorithms demonstrate the efficacy of the proposed classification and matching mechanism.
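The search-space argument above can be quantified: with a perfect classifier, a probe is matched only against its own class, so the expected fraction of the gallery searched is the sum of squared class probabilities. The class distribution below is invented for illustration (the paper reports an 86.89% average reduction in matching time, which this toy distribution does not reproduce):

```python
def expected_search_fraction(class_probs):
    # With a perfect 4-way classifier, a probe is only matched against its own
    # class, so the expected fraction of the gallery searched is sum(p^2).
    return sum(p * p for p in class_probs)

# Hypothetical class distribution over Fork, Eye, Bridge, Arch.
probs = [0.4, 0.3, 0.2, 0.1]
frac = expected_search_fraction(probs)
print(f"search {frac:.0%} of the gallery, a {1 - frac:.0%} reduction")
```

The more balanced the classes, the closer the fraction gets to 1/4 for four classes, which is why a consistent class assignment matters for large galleries.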
Citations: 2