IET Biometrics | Pub Date: 2022-07-12 | DOI: 10.1049/bme2.12086
Zhitao Wu, Hongxu Qu, Haigang Zhang, Jinfeng Yang
{"title":"Robust graph fusion and recognition framework for fingerprint and finger-vein","authors":"Zhitao Wu, Hongxu Qu, Haigang Zhang, Jinfeng Yang","doi":"10.1049/bme2.12086","DOIUrl":"https://doi.org/10.1049/bme2.12086","url":null,"abstract":"<p>The human finger is an essential carrier of biometric features. The finger itself contains multi-modal traits, including the fingerprint and finger-vein, which makes finger bi-modal fusion recognition both convenient and practical. The scale inconsistency and feature space mismatch of finger bi-modal images are major obstacles to effective fusion. Feature extraction based on graph structure can effectively solve the feature space mismatch between the finger bi-modalities, and end-to-end fusion recognition can be realised with graph convolutional neural networks (GCNs). However, this GCN-based fusion recognition strategy still has two pressing problems: first, the lack of a stable and efficient graph fusion method; second, the over-smoothing problem of GCNs, which degrades recognition performance. A novel fusion method is proposed to integrate the graph features of the fingerprint (FP) and finger-vein (FV). Furthermore, we analyse the inner relationship between the information transmission process and the over-smoothing problem in GCNs from an optimisation perspective, and show that the differentiated information between neighbouring nodes decreases as the number of layers increases, which is the direct cause of the over-smoothing problem. A modified deep graph convolutional neural network is proposed to alleviate over-smoothing. The intuition is that the differentiated features of the nodes should be properly preserved to ensure the uniqueness of the nodes themselves. Thus, a constraint term is added to the objective function of the GCN to emphasise the differentiated features of the nodes. 
The experimental results show that the proposed fusion method achieves more satisfactory performance in finger bi-modal biometric recognition, and the proposed constrained GCN effectively alleviates the over-smoothing problem.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"13-24"},"PeriodicalIF":2.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12086","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50129907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
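The over-smoothing argument in this abstract can be illustrated with a small numerical sketch. This is an illustration of the general phenomenon only, not the authors' model: the `alpha`-mixing term below merely stands in for their constraint on the GCN objective, and the graph and features are invented.

```python
import numpy as np

# Toy graph: 4 nodes on a path, adjacency with self-loops.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt  # symmetrically normalised adjacency

H0 = np.array([[1.0, 0.0],   # initial node features
               [0.0, 1.0],
               [1.0, 1.0],
               [0.0, 0.0]])

def propagate(H, layers, alpha=0.0):
    """Repeated GCN-style propagation. With alpha > 0, a fraction of
    the original features is mixed back in at every layer, preserving
    each node's differentiated information."""
    out = H.copy()
    for _ in range(layers):
        out = (1 - alpha) * (A_hat @ out) + alpha * H
    return out

def pairwise_spread(H):
    """Mean pairwise distance between node features: a rough measure
    of how much differentiated information the nodes still carry."""
    n = H.shape[0]
    dists = [np.linalg.norm(H[i] - H[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

plain = propagate(H0, layers=30)                   # deep stack: over-smoothed
constrained = propagate(H0, layers=30, alpha=0.2)  # identity-preserving variant
```

After 30 propagation steps the plain variant has collapsed the node features towards a common direction (small spread), while the constrained variant keeps the nodes clearly separated, which is the intuition behind adding a differentiation-preserving term to the objective.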
IET Biometrics | Pub Date: 2022-07-07 | DOI: 10.1049/bme2.12089
Michele Nappi, Hugo Proença, Guodong Guo, Sambit Bakshi
{"title":"25th ICPR—Real-time Visual Surveillance as-a-Service (VSaaS) for smart security solutions","authors":"Michele Nappi, Hugo Proença, Guodong Guo, Sambit Bakshi","doi":"10.1049/bme2.12089","DOIUrl":"10.1049/bme2.12089","url":null,"abstract":"<p>With the advent of ever-faster computing, real-time processing of visual data has been gaining importance in the field of surveillance. Automated decision-making by visual surveillance systems has also contributed to a huge leap in the capability of such systems and, of course, in their relevance to social security.</p><p>This special issue aimed to discuss cloud-based architectures of surveillance frameworks as a service. Such systems, especially when deployed to work in real time, are required to be fast, efficient, and sustainable under a varying load of visual data.</p><p>Four papers were selected for inclusion in this special issue.</p><p>Wyzykowski et al. present an approach to synthesise realistic, multiresolution, and multisensor fingerprints. Building on Anguli, a handcrafted fingerprint generator, they obtain dynamic ridge maps with sweat pores and scratches. A CycleGAN network is then trained to transform these maps into realistic fingerprints. Unlike other CNN-based works, this framework is able to generate images with different resolutions and styles for the same identity. Finally, the authors conducted a human perception analysis in which 60 volunteers could hardly differentiate between real and high-resolution synthesised fingerprints.</p><p>Pawar and Attar address the problem of detection and localisation of anomalies in surveillance videos, using pipelined deep autoencoders and one-class learning. Specifically, they used a convolutional autoencoder and a sequence-to-sequence long short-term memory autoencoder in a pipelined fashion for spatial and temporal learning of the videos, respectively. 
In this setting, they follow the one-class classification principle of training the model on normal data and testing it on anomalous data.</p><p>Tawfik Mohammed et al. describe a framework, implemented in a RAD (Rapid Application Development) paradigm, for performing iris recognition tests, based on the well-known Daugman processing chain. They start by segmenting the iris ring using the integro-differential operator, along with an edge-based Hough transform to isolate eyelids and eyelashes. After normalisation of the data (pseudo-polar domain), the features are encoded using a 1D log-Gabor kernel. Finally, the matching step is carried out using the Hamming distance.</p><p>Barra et al. describe an approach for automated head pose estimation that stems from a previous Partitioned Iterated Function Systems (PIFS)-based approach, which provides state-of-the-art accuracy at a high computing cost, and improve it by means of two regression models, namely the Gradient Boosting Regressor and the Extreme Gradient Boosting Regressor, achieving a much faster response and an even lower mean absolute error on the yaw and roll axes, as shown by experiments conducted on the BIWI and AFLW2000 datasets.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"277-278"},"PeriodicalIF":2.0,"publicationDate":"2022-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89783408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
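The matching step of the Daugman chain summarised above, a fractional Hamming distance between binary iris codes under occlusion masks, is simple enough to sketch. The code length and random codes below are toys, not the framework's implementation:

```python
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between binary iris codes,
    counted only over bits valid (unoccluded) in both masks."""
    valid = mask_a & mask_b
    n_valid = int(valid.sum())
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    disagree = (code_a ^ code_b) & valid
    return float(disagree.sum()) / n_valid

rng = np.random.default_rng(0)
code = rng.integers(0, 2, size=2048, dtype=np.uint8)   # enrolled iris code
mask = np.ones(2048, dtype=np.uint8)                   # no occlusion

noisy = code.copy()
noisy[:100] ^= 1          # same eye, 100 noisy bits -> distance ~0.05
other = rng.integers(0, 2, size=2048, dtype=np.uint8)  # different eye -> ~0.5
```

Distances well below 0.5 indicate the same eye; statistically independent codes disagree on about half of the valid bits, which is what makes the Hamming distance such a sharp decision statistic.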
IET Biometrics | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12088
Raghavendra Mudgalgundurao, Patrick Schuch, Kiran Raja, Raghavendra Ramachandra, Naser Damer
{"title":"Pixel-wise supervision for presentation attack detection on identity document cards","authors":"Raghavendra Mudgalgundurao, Patrick Schuch, Kiran Raja, Raghavendra Ramachandra, Naser Damer","doi":"10.1049/bme2.12088","DOIUrl":"10.1049/bme2.12088","url":null,"abstract":"<p>Identity documents (or IDs) play an important role in verifying the identity of a person, with wide applications in banking, travel, video-identification services and border controls. Replayed or photocopied ID cards can be misused to pass ID control in unsupervised scenarios if the liveness of a person is not checked. Detecting such presentation attacks when the ID card is presented virtually is a critical step for biometric systems to assure authenticity. In this paper, pixel-wise supervision on a DenseNet is proposed to detect printed and digitally replayed presentation attacks. The authors motivate the use of pixel-wise supervision to leverage minute cues in various artefacts, such as moiré patterns and artefacts left by printers. A baseline benchmark is presented using different handcrafted and deep-learning models on a newly constructed in-house database, obtained from an operational system, consisting of 886 users with 433 bona fide, 67 print and 366 display attack samples. 
It is demonstrated that the proposed approach achieves better performance than handcrafted features and deep models, with an Equal Error Rate of 2.22%, and Bona fide Presentation Classification Error Rates (BPCER) of 1.83% and 1.67% at Attack Presentation Classification Error Rates (APCER) of 5% and 10%, respectively.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"383-395"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12088","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72496186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
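The BPCER-at-fixed-APCER operating points reported in this kind of evaluation can be computed directly from raw comparison scores. A minimal sketch on synthetic, normally distributed scores (illustrative only, not the paper's data; the convention here is that higher scores mean "more likely bona fide"):

```python
import numpy as np

def bpcer_at_apcer(bona_fide_scores, attack_scores, apcer_target):
    """BPCER at the decision threshold where the Attack Presentation
    Classification Error Rate equals `apcer_target`."""
    # Threshold above which `apcer_target` of attacks still pass.
    thr = np.quantile(attack_scores, 1.0 - apcer_target)
    # Bona fide samples below the threshold are wrongly rejected.
    return float(np.mean(bona_fide_scores < thr))

rng = np.random.default_rng(1)
bona = rng.normal(3.0, 1.0, 1000)    # synthetic bona fide scores
attack = rng.normal(0.0, 1.0, 1000)  # synthetic attack scores

bpcer5 = bpcer_at_apcer(bona, attack, 0.05)
bpcer10 = bpcer_at_apcer(bona, attack, 0.10)
```

Lowering the tolerated APCER pushes the threshold up and the BPCER rises, which is the trade-off behind reporting BPCER at both 5% and 10% APCER.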
IET Biometrics | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12082
Zohra Rezgui, Amina Bassit, Raymond Veldhuis
{"title":"Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation","authors":"Zohra Rezgui, Amina Bassit, Raymond Veldhuis","doi":"10.1049/bme2.12082","DOIUrl":"10.1049/bme2.12082","url":null,"abstract":"<p>Most deep learning-based image classification models are vulnerable to adversarial attacks, which introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, while targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture had not previously been considered in the transferability scenarios presented in the literature. In this paper, this phenomenon is analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and evaluate a defence method as a countermeasure. Then, using the adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and with different perturbation magnitudes are compared. The authors’ results indicate transferability in the fixed-perturbation setting for a Fast Gradient Sign Method attack, and non-transferability in a pixel-guided denoiser attack setting. 
This non-transferability supports the use of fast, training-free adversarial attacks targeting soft-biometric classifiers as a means of achieving soft-biometric privacy protection while maintaining facial identity as utility.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"407-419"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88686270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
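The Fast Gradient Sign Method used in the fixed-perturbation setting has a one-line core. In this sketch the loss gradient is a random stand-in, since a real attack would backpropagate through the attacked classifier:

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon):
    """FGSM: move every pixel by +/-epsilon in the direction that
    increases the loss, then clip back to the valid [0, 1] range."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(8, 8))   # toy greyscale image
grad = rng.normal(size=(8, 8))             # stand-in for dLoss/dImage

adv = fgsm_perturb(img, grad, epsilon=0.03)
```

Every pixel moves by exactly epsilon (unless clipping intervenes), which is what makes the perturbation magnitude "fixed" in the first verification comparison setting.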
IET Biometrics | Pub Date: 2022-06-27 | DOI: 10.1049/bme2.12087
Simon Parkinson, Saad Khan, Alexandru-Mihai Badea, Andrew Crampton, Na Liu, Qing Xu
{"title":"An empirical analysis of keystroke dynamics in passwords: A longitudinal study","authors":"Simon Parkinson, Saad Khan, Alexandru-Mihai Badea, Andrew Crampton, Na Liu, Qing Xu","doi":"10.1049/bme2.12087","DOIUrl":"https://doi.org/10.1049/bme2.12087","url":null,"abstract":"<p>The use of keystroke timings as a behavioural biometric in fixed-text authentication mechanisms has been extensively studied. Previous research has investigated in isolation the effects of password length, character substitution, and participant repetition. These studies have used publicly available datasets containing a small number of passwords, with timings acquired from different experiments. Multiple experiments have also used the participant's first and last name as the password; however, this is not representative of a real password system. Not only is the user's name a weak password, but the user's pre-existing familiarity with typing it also minimises the variation in acquired samples that would normally occur as a new password is learnt. Furthermore, no study has considered the combined impact of length, substitution, and repetition using the same participant pool. This is explored in this work, where the authors collected timings from 65 participants typing 40 passwords with varying characteristics, 4 times per week for 8 weeks. A total of 81,920 timing samples were processed using an instance-based distance and threshold matching approach. 
Results of this study provide empirical insight into how a password policy should be created to maximise the accuracy of the biometric system when considering substitution type and longitudinal effects.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"25-37"},"PeriodicalIF":2.0,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12087","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50145548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
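An "instance-based distance and threshold matching approach" of the kind mentioned above can be sketched as follows; the feature layout, noise levels, and threshold here are invented for illustration and are not the study's parameters:

```python
import numpy as np

def match(template_samples, probe, threshold):
    """Accept the probe if its mean Manhattan distance to the
    enrolled timing samples falls below the threshold."""
    dists = [np.abs(t - probe).mean() for t in template_samples]
    return float(np.mean(dists)) < threshold

rng = np.random.default_rng(2)
user_mean = rng.uniform(80, 200, size=20)  # 20 hold/flight timings (ms)
enrolled = [user_mean + rng.normal(0, 5, 20) for _ in range(4)]

genuine = user_mean + rng.normal(0, 5, 20)   # same typist, new sample
impostor = rng.uniform(80, 200, size=20)     # unrelated typing rhythm
```

The threshold (here 15 ms) controls the trade-off between false rejections and false acceptances; longitudinal data of the kind the study collects is exactly what is needed to choose it well, since genuine typing variation drifts over time.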
IET Biometrics | Pub Date: 2022-06-16 | DOI: 10.1049/bme2.12075
Amina Bassit, Florian Hahn, Raymond Veldhuis, Andreas Peter
{"title":"Hybrid biometric template protection: Resolving the agony of choice between bloom filters and homomorphic encryption","authors":"Amina Bassit, Florian Hahn, Raymond Veldhuis, Andreas Peter","doi":"10.1049/bme2.12075","DOIUrl":"10.1049/bme2.12075","url":null,"abstract":"<p>Bloom filters (BFs) and homomorphic encryption (HE) are prominent techniques used to design biometric template protection (BTP) schemes that aim to protect sensitive biometric information during storage and biometric comparison. However, the pros and cons of BF- and HE-based BTPs are not well studied in the literature. We investigate the strengths and weaknesses of these two approaches, since both seem promising from a theoretical viewpoint. Our key insight is to extend our theoretical investigation to cover the practical case of iris recognition, on the grounds that the iris (1) benefits from the alignment-free property of BFs and (2) induces huge computational burdens when implemented in the HE-encrypted domain. BF-based BTPs can be implemented either to be fast with high recognition accuracy while missing the important privacy property of ‘unlinkability’, or to be fast with unlinkability while missing the high accuracy. HE-based BTPs, on the other hand, are highly secure, achieve good accuracy, and meet the unlinkability property, but they are much slower than BF-based approaches. 
As a synthesis, we propose a hybrid BTP scheme that combines the strengths of BFs and HE, ensuring unlinkability and high recognition accuracy while being about seven times faster than the traditional HE-based approach.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"430-444"},"PeriodicalIF":2.0,"publicationDate":"2022-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12075","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90056623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
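The alignment-free property of BF-based templates mentioned in the abstract comes from the widely used column-folding construction: columns of a binary code block are hashed into a set, so reordering columns within a block (e.g. from a small iris rotation) leaves the template unchanged. A toy sketch of that construction, not necessarily the exact variant benchmarked in the paper:

```python
import numpy as np

def bloom_filter_template(code, block_width=8):
    """Fold a binary feature matrix into per-block Bloom filters:
    each column of a block, read as an integer, sets one bit in that
    block's filter. The mapping is many-to-one, hence protective,
    and insensitive to column order within a block."""
    rows, cols = code.shape
    filters = []
    for start in range(0, cols, block_width):
        block = code[:, start:start + block_width]
        bf = np.zeros(2 ** rows, dtype=np.uint8)
        for j in range(block.shape[1]):
            idx = int("".join(str(b) for b in block[:, j]), 2)
            bf[idx] = 1
        filters.append(bf)
    return filters

def dissimilarity(filters_a, filters_b):
    """Mean normalised set difference between paired filters."""
    ds = [(a ^ b).sum() / (a.sum() + b.sum())
          for a, b in zip(filters_a, filters_b)]
    return float(np.mean(ds))

rng = np.random.default_rng(3)
code = rng.integers(0, 2, size=(6, 32), dtype=np.uint8)  # toy 6x32 iris code
template = bloom_filter_template(code)
```

Comparing templates is a cheap set operation, which is why BF-based schemes are fast; the hybrid scheme proposed in the paper keeps that speed while adding HE where unlinkability demands it.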
IET Biometrics | Pub Date: 2022-06-13 | DOI: 10.1049/bme2.12081
Jinxiao Zhong, Liangnian Jin, Ran Wang
{"title":"Point-convolution-based human skeletal pose estimation on millimetre wave frequency modulated continuous wave multiple-input multiple-output radar","authors":"Jinxiao Zhong, Liangnian Jin, Ran Wang","doi":"10.1049/bme2.12081","DOIUrl":"10.1049/bme2.12081","url":null,"abstract":"<p>Compared with traditional approaches that use vision sensors, which provide a high-resolution representation of targets, millimetre-wave radar is robust to scene lighting and weather conditions and has a wider range of applications. Current methods of human skeletal pose estimation can reconstruct targets, but they lose spatial information or do not take the density of the point cloud into consideration. We propose a skeletal pose estimation method that uses point convolution to extract features from the point cloud. By extracting the local information and density of each point in the point cloud of the target, the spatial location and structure information of the target can be obtained, and the accuracy of the pose estimation is increased. The extraction of point cloud features is based on point-by-point convolution; that is, different weights are applied to different features of each point, which also increases the nonlinear expression ability of the model. Experiments show that the proposed approach is effective. 
The method produces more distinct skeletal joints and a lower mean absolute error, with average localisation errors of 6.1 cm in <i>X</i>, 3.5 cm in <i>Y</i> and 3.3 cm in <i>Z</i>.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"333-342"},"PeriodicalIF":2.0,"publicationDate":"2022-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12081","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91101921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
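"Point-by-point convolution" as described above is a 1x1 convolution: a single learnt weight matrix applied independently to each point's feature vector, so the layer respects the unordered nature of a radar point cloud. A minimal sketch with made-up dimensions (not the paper's architecture):

```python
import numpy as np

def pointwise_conv(points, W, b):
    """1x1 ('point-by-point') convolution with ReLU: the same learnt
    weights act on each point's features independently, so permuting
    the points merely permutes the output rows."""
    return np.maximum(points @ W + b, 0.0)

rng = np.random.default_rng(4)
cloud = rng.normal(size=(128, 4))   # 128 radar points: x, y, z, intensity
W = 0.1 * rng.normal(size=(4, 16))  # lifts 4 input channels to 16
b = np.zeros(16)

features = pointwise_conv(cloud, W, b)
```

Because the weights differ per input channel, each feature of a point is weighted differently, which is the nonlinearity-increasing property the abstract refers to.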
IET Biometrics | Pub Date: 2022-06-07 | DOI: 10.1049/bme2.12084
Jose Maureira, Juan E. Tapia, Claudia Arellano, Christoph Busch
{"title":"Analysis of the synthetic periocular iris images for robust Presentation Attacks Detection algorithms","authors":"Jose Maureira, Juan E. Tapia, Claudia Arellano, Christoph Busch","doi":"10.1049/bme2.12084","DOIUrl":"10.1049/bme2.12084","url":null,"abstract":"<p>The LivDet-2020 competition, which focuses on Presentation Attack Detection (PAD) algorithms, still has open problems, mainly unknown attack scenarios; it is therefore crucial to enhance PAD methods. This can be achieved by augmenting the number of Presentation Attack Instrument (PAI) and bona fide (genuine) images used to train such algorithms. Unfortunately, the capture and creation of PAIs, and even the capture of bona fide images, are sometimes complex to achieve. The generation of synthetic images with Generative Adversarial Network (GAN) algorithms may help, and has shown significant improvements in recent years. This paper presents a benchmark of GAN methods used to generate a novel synthetic PAI from a small set of periocular near-infrared images. The best PAI was obtained using StyleGAN2, and it was tested against the best PAD algorithm from LivDet-2020. The synthetic PAI was able to fool that algorithm: all synthetic images were classified as bona fide. A MobileNetV2 was then trained with the synthetic PAI as a new class to achieve a more robust PAD. The resulting PAD was able to classify 96.7% of the synthetic images as attacks, with a BPCER<sub>10</sub> of 0.24%. 
These results demonstrate the need for PAD algorithms to be constantly updated and trained with synthetic images.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"343-354"},"PeriodicalIF":2.0,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12084","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82133166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Biometrics | Pub Date: 2022-06-03 | DOI: 10.1049/bme2.12083
Andre Brasil Vieira Wyzykowski, Mauricio Pamplona Segundo, Rubisley de Paula Lemes
{"title":"Multiresolution synthetic fingerprint generation","authors":"Andre Brasil Vieira Wyzykowski, Mauricio Pamplona Segundo, Rubisley de Paula Lemes","doi":"10.1049/bme2.12083","DOIUrl":"10.1049/bme2.12083","url":null,"abstract":"<p>Public access to existing high-resolution fingerprint databases has been discontinued. Moreover, no hybrid database containing fingerprints from different sensors at high and medium resolutions exists. To address these issues, a novel hybrid approach to synthesising realistic, multiresolution, multisensor fingerprints is presented. The first step was to improve Anguli, a handcrafted fingerprint generator, to create pores, scratches, and dynamic ridge maps. The maps are then converted into realistic fingerprints using CycleGAN, which adds texture to the images. Unlike other neural network-based methods, the authors’ method generates multiple images with different resolutions and styles for the same identity. With the authors’ approach, a synthetic database of 14,800 fingerprints is built. In addition, fingerprint recognition experiments with pore- and minutiae-based matching techniques, together with several fingerprint quality analyses, are conducted to confirm the similarity between the real and synthetic databases. Finally, a human classification analysis is performed, in which volunteers could not distinguish between authentic and synthetic fingerprints. 
These experiments demonstrate that the authors’ approach is suitable for supporting further fingerprint recognition studies in the absence of real databases.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 4","pages":"314-332"},"PeriodicalIF":2.0,"publicationDate":"2022-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12083","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77469700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Biometrics | Pub Date: 2022-05-27 | DOI: 10.1049/bme2.12080
Chaoying Tang, Mengen Qian, Ru Jia, Haodong Liu, Biao Wang
{"title":"Forearm multimodal recognition based on IAHP-entropy weight combination","authors":"Chaoying Tang, Mengen Qian, Ru Jia, Haodong Liu, Biao Wang","doi":"10.1049/bme2.12080","DOIUrl":"https://doi.org/10.1049/bme2.12080","url":null,"abstract":"<p>Biometrics are among the most popular authentication methods due to their advantages over traditional methods, such as higher security, better accuracy and more convenience. The recent COVID-19 pandemic has led to the wide use of face masks, which greatly affects traditional face recognition technology, and has also increased the focus on hygienic and contactless identity verification methods. The forearm is a new biometric that contains discriminative information. In this paper, we propose a multimodal recognition method that combines the veins and geometry of a forearm. Five features are extracted from a forearm Near-Infrared (NIR) image: SURF features, local line structures, global graph representations, forearm width and forearm boundary. These features are matched individually and then fused at the score level based on the Improved Analytic Hierarchy Process (IAHP)-entropy weight combination. Comprehensive experiments were carried out to evaluate the proposed recognition method and the fusion rule. 
The matching results show that the proposed method achieves satisfactory performance.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"12 1","pages":"52-63"},"PeriodicalIF":2.0,"publicationDate":"2022-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12080","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50146449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
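The fusion rule above combines subjective IAHP weights with objective entropy weights; the entropy-weight half alone is easy to sketch. The scores below are invented for illustration: one modality's match scores separate well across trials, the other's barely vary, so the entropy weighting favours the first.

```python
import numpy as np

def entropy_weights(score_matrix):
    """Entropy weight method: columns (modalities) whose scores vary
    more across trials carry more information (lower normalised
    entropy) and receive larger fusion weights."""
    X = np.asarray(score_matrix, dtype=float)
    P = X / X.sum(axis=0, keepdims=True)           # column-wise distribution
    n = X.shape[0]
    H = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy scaled to [0, 1]
    w = 1.0 - H                                    # information content
    return w / w.sum()

def fuse(scores, weights):
    """Weighted-sum score-level fusion."""
    return float(np.dot(scores, weights))

# Toy match scores (rows: trials; col 0: vein; col 1: geometry).
S = np.array([[0.90, 0.52],
              [0.10, 0.48],
              [0.85, 0.55],
              [0.15, 0.45]])
w = entropy_weights(S)
```

The near-constant second column receives almost no weight; in the paper this objective weighting is further blended with IAHP's expert-derived weights before fusing the five feature matchers.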