{"title":"Locality preserving binary face representations using auto-encoders","authors":"Mohamed Amine Hmani, Dijana Petrovska-Delacrétaz, Bernadette Dorizzi","doi":"10.1049/bme2.12096","DOIUrl":"10.1049/bme2.12096","url":null,"abstract":"<p>Crypto-biometric schemes, such as fuzzy commitment, require binary sources. A novel approach to binarising biometric data using Deep Neural Networks applied to facial biometric data is introduced. The binary representations are evaluated on the MOBIO and the Labelled Faces in the Wild databases, where their biometric recognition performance and entropy are measured. The proposed binary embeddings give a state-of-the-art performance on both databases with almost negligible degradation compared to the baseline. The representations' length can be controlled. Using a pretrained convolutional neural network and training the model on a cleaned version of the MS-celeb-1M database, binary representations of length 4096 bits and 3300 bits of entropy are obtained. The extracted representations have high entropy and are long enough to be used in crypto-biometric systems, such as fuzzy commitment. Furthermore, the proposed approach is data-driven and constitutes a locality preserving hashing that can be leveraged for data clustering and similarity searches. As a use case of the binary representations, a cancellable system is created based on the binary embeddings using a shuffling transformation with a randomisation key as a second factor.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"11 5","pages":"445-458"},"PeriodicalIF":2.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12096","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76420898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Biometrics | Publication date: 2022-10-03 | DOI: 10.1049/bme2.12099
Title: Discriminative training of spiking neural networks organised in columns for stream-based biometric authentication
Authors: Enrique Argones Rúa, Tim Van hamme, Davy Preuveneers, Wouter Joosen
Volume 11, Issue 5, pp. 485-497 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12099
Abstract: Stream-based biometric authentication using a novel approach based on spiking neural networks (SNNs) is addressed. SNNs have proven advantages regarding energy consumption, and they are a perfect match for some proposed neuromorphic hardware chips, which can lead to broader adoption of artificial intelligence technologies in user-device applications. One of the challenges when using SNNs is the discriminative training of the network, since it is not straightforward to apply the well-known error backpropagation (EBP) massively used in traditional artificial neural networks (ANNs). A network structure based on neuron columns, resembling cortical columns in the human cortex, is proposed, together with a new derivation of error backpropagation for spiking neural networks that integrates the lateral inhibition in these structures. The potential of the proposed approach is tested on the task of inertial gait authentication, where gait is quantified as signals from inertial measurement units (IMUs), and the authors' approach is compared to state-of-the-art ANNs. In the experiments, SNNs provide competitive results, with a difference of around 1% in half total error rate compared to state-of-the-art ANNs in the context of IMU-based gait authentication.
IET Biometrics | Publication date: 2022-09-21 | DOI: 10.1049/bme2.12100
Title: Robust medical zero-watermarking algorithm based on Residual-DenseNet
Authors: Cheng Gong, Jing Liu, Ming Gong, Jingbing Li, Uzair Aslam Bhatti, Jixin Ma
Volume 11, Issue 6, pp. 547-556 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12100
Abstract: To solve the problem of poor robustness of existing traditional DCT-based medical image watermarking algorithms under geometric attacks, a novel deep learning-based robust zero-watermarking algorithm for medical images is proposed. A Residual-DenseNet is designed that takes low-frequency features obtained by the discrete cosine transform of medical images as labels and applies skip connections and a new objective function to strengthen and extract high-level semantic features that can effectively distinguish different medical images; these features are binarised to obtain robust hash vectors. The hash vectors are then bound with the chaotically encrypted watermark to generate the corresponding keys, completing watermark generation. The proposed algorithm neither modifies the original medical image in the watermark generation stage nor requires it in the watermark extraction stage, and it is also suitable for multiple watermarks. Experimental results show that the proposed algorithm is robust under both conventional and geometric attacks.
IET Biometrics | Publication date: 2022-09-14 | DOI: 10.1049/bme2.12095
Title: Towards understanding the character of quality sampling in deep learning face recognition
Authors: Iurii Medvedev, João Tremoço, Beatriz Mano, Luís Espírito Santo, Nuno Gonçalves
Volume 11, Issue 5, pp. 498-511 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12095
Abstract: Face recognition has become one of the most important modalities of biometrics in recent years. It widely utilises deep learning computer vision tools and adopts large collections of unconstrained face images of celebrities for training. This choice of data is related to its public availability, since existing document-compliant face image collections are hardly accessible due to security and privacy issues. Such inconsistency between the training data and the deployment scenario may lead to a drop in performance in biometric systems developed specifically for dealing with ID-document-compliant images. To mitigate this problem, we propose to regularise the training of the deep face recognition network with a specific sample mining strategy that penalises samples by their estimated quality. In addition to several quality metrics considered in recent work, we also expand our deep learning strategy to other sophisticated quality estimation methods and perform experiments to better understand the nature of quality sampling. Namely, we seek the penalising manner (sampling character) that best serves the purpose of adapting deep learning face recognition to images of ID and travel documents. Extensive experiments demonstrate the efficiency of the approach for ID-document-compliant face images.
IET Biometrics | Publication date: 2022-09-11 | DOI: 10.1049/bme2.12098
Title: The following article for this Special Issue was published in a different Issue
Volume 11, Issue 5, p. 529 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12098
Christian Rathgeb, Daniel Fischer, Pawel Drozdowski, Christoph Busch. Reliable detection of doppelgängers based on deep face representations. IET Biometrics 2022 May; 11(3):215-224. https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/bme2.12072
IET Biometrics | Publication date: 2022-09-02 | DOI: 10.1049/bme2.12094
Title: Face morphing attacks and face image quality: The effect of morphing and the unsupervised attack detection by quality
Authors: Biying Fu, Naser Damer
Volume 11, Issue 5, pp. 359-382 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12094
Abstract: Morphing attacks are a form of presentation attacks that have gathered increasing attention in recent years. A morphed image can be successfully verified against multiple identities. This operation therefore poses serious security issues related to the ability of a travel or identity document to be verified as belonging to multiple persons. Previous studies touched on the issue of the quality of morphing attack images, however with the main goal of quantitatively proving the realistic appearance of the produced attacks. The authors theorise that the morphing process might have an effect on both the perceptual image quality and the image utility in face recognition (FR) when compared to bona fide samples. To investigate this theory, this work provides an extensive analysis of the effect of morphing on face image quality, including both general image quality measures and face image utility measures. The analysis is not limited to a single morphing technique but covers six different morphing techniques and five different data sources using ten different quality measures. It reveals consistent separability between the quality scores of morphing attack and bona fide samples as measured by certain quality measures. The authors' study builds on this effect and investigates the possibility of performing unsupervised morphing attack detection (MAD) based on quality scores, looking into intra- and inter-dataset detectability to evaluate the generalisability of such a detection concept across different morphing techniques and bona fide sources. The final results point out that a set of quality measures, such as MagFace and CNNIQA, can be used to perform unsupervised and generalised MAD with a correct classification accuracy of over 70%.
IET Biometrics | Publication date: 2022-08-30 | DOI: 10.1049/bme2.12090
Title: Benchmarking human face similarity using identical twins
Authors: Shoaib Meraj Sami, John McCauley, Sobhan Soleymani, Nasser Nasrabadi, Jeremy Dawson
Volume 11, Issue 5, pp. 459-484 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12090
Abstract: The problem of distinguishing identical twins and non-twin look-alikes in automated facial recognition (FR) applications has become increasingly important with the widespread adoption of facial biometrics. Due to the high facial similarity of both identical twins and look-alikes, these face pairs represent the hardest cases presented to facial recognition tools. This work presents an application of one of the largest twin data sets compiled to date to address two FR challenges: (1) determining a baseline measure of facial similarity between identical twins and (2) applying this similarity measure to determine the impact of doppelgangers, or look-alikes, on FR performance for large face data sets. The facial similarity measure is determined via a deep convolutional neural network. This network is trained on a tailored verification task designed to encourage the network to group together highly similar face pairs in the embedding space and achieves a test AUC of 0.9799. The proposed network provides a quantitative similarity score for any two given faces and has been applied to large-scale face data sets to identify similar face pairs. An additional analysis that correlates the comparison score returned by a facial recognition tool and the similarity score returned by the proposed network has also been performed.
IET Biometrics | Publication date: 2022-08-27 | DOI: 10.1049/bme2.12092
Title: Combining 2D texture and 3D geometry features for reliable iris presentation attack detection using light field focal stack
Authors: Zhengquan Luo, Yunlong Wang, Nianfeng Liu, Zilei Wang
Volume 11, Issue 5, pp. 420-429 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12092
Abstract: Iris presentation attack detection (PAD) is still an unsolved problem, mainly due to the variety of spoof attack strategies and poor generalisation to unseen attacks. In this paper, the merits of both light field (LF) imaging and deep learning (DL) are leveraged to combine 2D texture and 3D geometry features for iris liveness detection. By exploring off-the-shelf deep features of planar-oriented and sequence-oriented deep neural networks (DNNs) on the rendered focal stack, the proposed framework excavates the differences in 3D geometric structure and 2D spatial texture between bona fide and spoofing irises captured by LF cameras. A group of pre-trained DL models is adopted as feature extractors, and the parameters of SVM classifiers are optimised on a limited number of samples. Moreover, two-branch feature fusion further strengthens the framework's robustness and reliability against severe motion blur, noise, and other degradation factors. Comparative experiments indicate that variants of the proposed framework significantly surpass PAD methods that take 2D planar images or the LF focal stack as input, even recent state-of-the-art (SOTA) methods fine-tuned on the adopted database. Presentation attacks, including printed papers, printed photos, and electronic displays, can be accurately detected without fine-tuning a bulky CNN. In addition, ablation studies validate the effectiveness of fusing geometric structure and spatial texture features, and multi-class attack detection experiments verify the good generalisation ability of the proposed framework to unseen presentation attacks.
IET Biometrics | Publication date: 2022-08-25 | DOI: 10.48550/arXiv.2208.11822
Title: Benchmarking Human Face Similarity Using Identical Twins (arXiv preprint of the journal article listed above)
Authors: S. Sami, John McCauley, Sobhan Soleymani, N. Nasrabadi, J. Dawson
IET Biometrics | Publication date: 2022-08-22 | DOI: 10.1049/bme2.12093
Title: Protection of gait data set for preserving its privacy in deep learning pipeline
Authors: Anubha Parashar, Rajveer Singh Shekhawat
Volume 11, Issue 6, pp. 557-569 | Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12093
Abstract: Human gait is a biometric used in security systems because it is unique to each individual and allows recognition from a distance without any intervention. To develop such a system, a comprehensive data set specific to the application is needed. If this data set falls into the hands of rogue elements, they can easily access the secured system built on it, so protection of the gait data set becomes essential. Systems using deep learning are known to be prone to hacking, which makes maintaining the privacy of gait data sets in the deep learning pipeline even more difficult under adversarial attacks or unauthorised access. One popular technique for preventing misuse of the data set is anonymisation. A reversible gait anonymisation pipeline is proposed that modifies gait geometry and morphs the images, that is, applies texture modifications; such modified data prevent attackers from using the data set for adversarial attacks. Nine layers are proposed to effect geometrical modifications, and a fixed gait texture template is used for morphing. Both modify the gait data so that an individual cannot be identified while the naturalness of the gait is maintained. The proposed method is evaluated using a similarity index as well as the recognition rate. The impact of various geometrical and texture modifications on silhouettes has been investigated through crowdsourcing and machine learning experiments. The results of both types of experiments show that texture modification has a stronger impact on the level of privacy protection than geometric shape modifications. In these experiments, the similarity index achieved is above 99%. These findings open new research directions regarding adversarial attacks and privacy protection for gait recognition data sets.