2019 International Conference on Biometrics (ICB) — Latest Publications

Seg-Edge Bilateral Constraint Network for Iris Segmentation
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987405
Junxing Hu, Hui Zhang, Lihu Xiao, Jing Liu, Xingguang Li, Zhaofeng He, Ling Li
Iris semantic segmentation in less-constrained scenarios is the basis of iris recognition. We propose an end-to-end trainable model for iris segmentation, namely the Seg-Edge bilateral constraint network (SEN). SEN uses the edge map and the coarse segmentation to mutually constrain and optimize each other, producing accurate iris segmentation results. The iris edge map generated from low-level convolutional layers passes detailed edge information to the iris segmentation, while the iris region generated by high-level semantic segmentation constrains the edge filtering scope, making the edge filtering focus on the objects of interest. Moreover, we prune filters and their corresponding feature maps identified as useless by the l1-norm, which results in a lightweight iris segmentation network while keeping performance almost intact or even better. Experimental results suggest that the proposed method outperforms state-of-the-art iris segmentation methods.
Citations: 4
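The l1-norm pruning criterion mentioned in the abstract follows the common filter-pruning recipe: rank each convolutional filter by the l1-norm of its weights and drop the smallest. A minimal NumPy sketch (the layer shape and keep ratio are illustrative assumptions, not values from the paper):

```python
import numpy as np

def prune_filters_l1(conv_weight, keep_ratio=0.5):
    """Rank conv filters by their l1-norm and keep the largest ones.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the indices of kept filters.
    """
    # One l1-norm per output filter.
    norms = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * conv_weight.shape[0])))
    # Indices of the largest-norm filters, sorted for stable indexing.
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return conv_weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))        # toy 8-filter conv layer
pruned, kept = prune_filters_l1(w, 0.5)
print(pruned.shape)                       # (4, 3, 3, 3)
```

In a real network the matching input channels of the next layer's filters are removed as well, which is what makes the pruned model genuinely lighter.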
Face Anti-spoofing using Hybrid Residual Learning Framework
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987283
Usman Muhammad, A. Hadid
Face spoofing attacks have received significant attention because criminals are developing techniques such as warped photos, cut photos, and 3D masks to easily fool face recognition systems. Deep learning models offer powerful solutions for improving the security of biometric systems, but attaining the benefits of multilayer features remains a significant challenge. To alleviate this limitation, this paper presents a hybrid framework that builds the feature representation by fusing ResNet features with more discriminative power. First, two variants of the residual learning framework are selected as deep feature extractors to extract informative features. Second, the fully-connected layers are used as separate feature descriptors. Third, PCA-based canonical correlation analysis (CCA) is proposed as a feature fusion strategy to combine relevant information and improve the features' discriminative capacity. Finally, a support vector machine (SVM) is used to construct the final representation of facial features. Experimental results show that our proposed framework achieves state-of-the-art performance without fine-tuning, data augmentation, or coding strategies on benchmark databases, namely the MSU mobile face spoof database and the CASIA face anti-spoofing database.
Citations: 15
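The paper's exact PCA+CCA pipeline is not detailed in the abstract; as a rough illustration of CCA-based feature fusion, the sketch below finds canonical directions for two feature sets via the standard QR+SVD construction and concatenates the projections (the toy data, dimensions, and component count are all assumptions):

```python
import numpy as np

def cca_fuse(X, Y, n_comp=2):
    """Project two feature sets onto their top canonical directions and
    concatenate the projections (a common CCA fusion recipe).
    X: (n, p), Y: (n, q). Returns fused (n, 2*n_comp) and correlations."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, Rx = np.linalg.qr(Xc)
    Qy, Ry = np.linalg.qr(Yc)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy)
    A = np.linalg.solve(Rx, U[:, :n_comp])     # canonical weights for X
    B = np.linalg.solve(Ry, Vt.T[:, :n_comp])  # canonical weights for Y
    return np.hstack([Xc @ A, Yc @ B]), s[:n_comp]

rng = np.random.default_rng(1)
z = rng.normal(size=(100, 2))                  # shared latent signal
X = np.hstack([z, rng.normal(size=(100, 3))])
Y = np.hstack([z + 0.1 * rng.normal(size=(100, 2)), rng.normal(size=(100, 4))])
fused, corrs = cca_fuse(X, Y)
print(fused.shape, corrs.round(2))
```

Because both feature sets contain the same two latent dimensions, the top two canonical correlations come out close to 1; a PCA step before CCA (as the abstract suggests) would simply reduce `X` and `Y` first.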
Mobile Face Recognition Systems: Exploring Presentation Attack Vulnerability and Usability
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987404
H. Hofbauer, L. Debiasi, A. Uhl
We have evaluated face recognition software intended for use with handheld devices (smartphones). While we cannot go into specifics of the systems under test (due to NDAs), we can present the results of our evaluation of liveness detection (presentation attack detection), matching performance, and the success of attacks of different complexity levels. We contrast robustness against presentation attacks with the systems' usability during regular use, and highlight where current commercial off-the-shelf (COTS) systems stand in that regard. We examine the results specifically under the trade-off between acceptance, which is linked to usability, and security, which usually impacts usability negatively.
Citations: 2
On the Extent of Longitudinal Finger Rotation in Publicly Available Finger Vein Data Sets
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987384
B. Prommegger, Christof Kauba, A. Uhl
Finger vein recognition deals with the identification of subjects based on the venous patterns within their fingers. The majority of the publicly available finger vein data sets were acquired with scanner devices that capture a single finger from the palmar side using light transmission. Some devices are equipped with a contact surface or other structures to support finger placement. However, these means cannot prevent all possible types of finger misplacement; in particular, longitudinal finger rotation cannot be averted. It has been shown that this type of rotation results in a non-linear deformation of the vein structure, causing severe problems for finger vein recognition systems. So far it is not known whether, and to what extent, longitudinal finger rotation is present in publicly available finger vein data sets. This paper evaluates the presence and extent of longitudinal finger rotation in four publicly available finger vein data sets and provides the estimated rotation angles to the scientific public. This additional information will increase the value of the evaluated data sets. To verify the correctness of the estimated rotation angles, we furthermore demonstrate that a simple rotation correction using those angles improves recognition performance.
Citations: 11
Cross-spectrum thermal to visible face recognition based on cascaded image synthesis
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987347
Khawla Mallat, N. Damer, F. Boutros, Arjan Kuijper, J. Dugelay
Face synthesis from the thermal to the visible spectrum is fundamental to cross-spectrum face recognition, as it simplifies integration into existing commercial face recognition systems and enables manual face verification. In this paper, a new solution based on cascaded refinement networks is proposed. The method generates visible-like colored images of high visual quality without requiring large amounts of training data. By employing a contextual loss function during training, the proposed network is inherently scale and rotation invariant. We discuss the visual perception of the generated visible-like faces in comparison with recent works. We also provide an objective evaluation in terms of cross-spectrum face recognition, where the generated faces were compared against a gallery in the visible spectrum using two state-of-the-art deep learning based face recognition systems. Compared to the recently published TV-GAN solution, the performance of the face recognition systems OpenFace and LightCNN improved by 42.48% (i.e., from 10.76% to 15.37%) and 71.43% (i.e., from 33.606% to 57.612%), respectively.
Citations: 21
Deep Learning from 3DLBP Descriptors for Depth Image Based Face Recognition
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987432
João Baptista Cardia Neto, A. Marana, C. Ferrari, S. Berretti, A. Bimbo
In this paper, we propose a new framework for face recognition from depth images that is both effective and efficient. It consists of two main stages: first, a handcrafted low-level feature extractor is applied to the raw depth data of the face, extracting the corresponding descriptor images (DIs); then, a not-so-deep (shallow) convolutional neural network (SCNN) learns from the DIs. This architecture showed two main advantages over the direct application of a deep CNN (DCNN) to the depth images of the face. On the one hand, the DIs enrich the raw depth data, emphasizing relevant traits of the face while reducing acquisition noise; this proved decisive in improving the learning capability of the network. On the other hand, the DIs capture low-level features of the face, playing the role for the SCNN that the first layers play in a DCNN architecture. In this way, the SCNN has far fewer layers and can be trained more easily and faster. Extensive experiments on low- and high-resolution depth face datasets confirmed these advantages, showing results comparable or superior to the state-of-the-art while using far less training data, training time, and network memory.
Citations: 3
Understanding Confounding Factors in Face Detection and Recognition
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987419
Janet Anderson, C. Otto, Brianna Maze, N. Kalka, James A. Duncan
Currently, face recognition systems perform at or above human level on media captured under controlled conditions. However, confounding factors such as pose, illumination, and expression (PIE), as well as facial hair, gender, skin tone, age, and resolution, can degrade performance, especially when large variations are present. We utilize the IJB-C dataset to investigate the impact of confounding factors on both face detection accuracy and face verification genuine matcher scores. Since IJB-C was collected without the use of a face detector, it can be used to evaluate face detection performance, and it contains large variations in pose, illumination, expression, and other factors. We also use a linear regression model analysis to identify which confounding factors are most influential for face verification performance.
Citations: 3
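The regression analysis the abstract describes amounts to fitting genuine match scores against factor covariates and comparing standardized coefficients. A toy NumPy sketch of that idea (the factors, their simulated effect sizes, and all numbers are hypothetical, not results from the paper):

```python
import numpy as np

# Hypothetical data: genuine match scores vs. three confounding factors.
rng = np.random.default_rng(2)
n = 200
yaw = rng.uniform(-60, 60, n)          # head yaw in degrees
illum = rng.uniform(0, 1, n)           # illumination index
resolution = rng.uniform(0, 1, n)      # resolution index
# Simulated scores: |yaw| hurts most, resolution helps, illumination minor.
score = (0.8 - 0.004 * np.abs(yaw) + 0.15 * resolution
         + 0.02 * illum + 0.02 * rng.normal(size=n))

# Standardize predictors so coefficient magnitudes are comparable.
X = np.column_stack([np.abs(yaw), illum, resolution])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
design = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(design, score, rcond=None)
for name, c in zip(["intercept", "|yaw|", "illumination", "resolution"], coef):
    print(f"{name:>12}: {c:+.3f}")
```

With standardized predictors, the coefficient with the largest magnitude points at the most influential factor (here, by construction, pose).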
Crafting A Panoptic Face Presentation Attack Detector
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987257
Suril Mehta, A. Uberoi, Akshay Agarwal, Mayank Vatsa, Richa Singh
With advancements in technology and the growing popularity of facial photo editing in the social media landscape, tools such as face swapping and face morphing have become increasingly accessible to the general public. This opens up possibilities for different kinds of face presentation attacks, which impostors can exploit to gain unauthorized access to a biometric system. Moreover, the wide availability of 3D printers has caused a shift from print attacks to 3D mask attacks. With the increasing variety of attacks, it is necessary to devise a generic and ubiquitous algorithm with a panoptic view of these attacks that can detect a spoofed image irrespective of the method used. The key contribution of this paper is a deep learning based panoptic algorithm for the detection of both digital and physical presentation attacks using a Cross Asymmetric Loss Function (CALF). Performance is evaluated for digital and physical attacks in three scenarios: a ubiquitous environment, individual databases, and cross-attack/cross-database settings. Experimental results showcase the superior performance of the proposed presentation attack detection algorithm.
Citations: 17
A New Approach for EEG-Based Biometric Authentication Using Auditory Stimulation
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987271
Sherif Nagib Abbas Seha, D. Hatzinakos
In this paper, a new approach to human recognition is presented using brainwave responses to auditory stimulation. A system based on this class of brainwaves offers advantages over conventional traits, being more secure, harder to spoof, and cancelable. For this purpose, EEG signals were recorded from 21 subjects while they listened to modulated auditory tones, in single- and two-session setups. Three different types of features were evaluated, based on energy and entropy estimates of the EEG sub-band rhythms obtained using narrow-band Gaussian filtering and wavelet packet decomposition. These features are classified using discriminant analysis in both identification and verification modes of authentication. High recognition rates of up to 97.18% and error rates as low as 4.3% were achieved in the single-session setup. Moreover, in the two-session setup, the proposed system is shown to be more time-permanent than previous works.
Citations: 6
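Sub-band energy features like those in the abstract can be illustrated with a simple FFT-based band-energy computation (the paper itself uses narrow-band Gaussian filtering and wavelet packet decomposition; the band limits below are the conventional EEG rhythm ranges, and the sampling rate and test signal are assumptions):

```python
import numpy as np

# Conventional EEG rhythm bands in Hz (not values taken from the paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_energies(signal, fs):
    """Fraction of total spectral power falling in each EEG band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    total = power.sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Toy "EEG": a strong 10 Hz (alpha) tone plus a weaker 20 Hz (beta) tone.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
feats = band_energies(eeg, fs)
print({k: round(v, 3) for k, v in feats.items()})
```

The resulting per-band energy fractions form a compact feature vector; an entropy variant would replace the band sums with an entropy estimate over the normalized in-band spectrum.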
Alignment Free and Distortion Robust Iris Recognition
2019 International Conference on Biometrics (ICB) Pub Date : 2019-06-01 DOI: 10.1109/ICB45273.2019.8987369
Min Ren, Caiyong Wang, Yunlong Wang, Zhenan Sun, T. Tan
Iris recognition is a reliable personal identification method, but there is still much room to improve its accuracy, especially in less-constrained situations. For example, free head movement may cause large rotation differences between iris images, and illumination variations may cause irregular distortion of the iris texture. To robustly match intra-class iris images under head rotation, existing solutions usually need a precise alignment operation, either by exhaustive search within a determined range during iris image preprocessing or by brute-force search for the minimum Hamming distance during iris feature matching. In the wild, iris rotation is of much greater uncertainty than in constrained situations, and exhaustive search within a determined range is impracticable. This paper presents a unified feature-level solution for both alignment-free and distortion-robust iris recognition in the wild. A new deep learning based method named Alignment Free Iris Network (AFINet) is proposed, which utilizes a trainable VLAD (Vector of Locally Aggregated Descriptors) encoder called NetVLAD [18] to decouple the correlations between local representations and their spatial positions. Deformable convolution [5] is leveraged to overcome iris texture distortion through dense adaptive sampling. The results of extensive experiments on three public iris image databases and simulated degradation databases show that AFINet significantly outperforms state-of-the-art iris recognition methods.
Citations: 5
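The brute-force minimum-Hamming-distance matching that AFINet is designed to avoid is the classic rotation compensation for binary iris codes: circularly shift one code along the angular axis and keep the best score. A NumPy sketch (the code size and shift range are illustrative assumptions):

```python
import numpy as np

def min_shift_hamming(code_a, code_b, max_shift=8):
    """Normalized Hamming distance between two binary iris codes,
    minimized over circular shifts along the angular axis.
    This is the brute-force baseline the abstract refers to."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s, axis=1)   # axis 1 = angular direction
        best = min(best, np.mean(code_a != shifted))
    return best

rng = np.random.default_rng(3)
code = rng.integers(0, 2, size=(16, 64))       # toy 16x64 binary iris code
rotated = np.roll(code, 5, axis=1)             # same iris, rotated 5 columns
impostor = rng.integers(0, 2, size=(16, 64))
print(min_shift_hamming(code, rotated))        # 0.0 -- rotation compensated
print(min_shift_hamming(code, impostor))       # ~0.4-0.5 for unrelated codes
```

The cost grows linearly with the shift range, which is exactly why unconstrained rotation makes this search impractical and motivates a rotation-invariant encoding such as NetVLAD aggregation.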