{"title":"Bilateral Symmetry in Central Retinal Blood Vessels*","authors":"S. Biswas, Johan Rohdin, M. Drahanský","doi":"10.1109/IWBF49977.2020.9107969","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107969","url":null,"abstract":"Symmetry can be defined as uniformity, equivalence or exact similarity of two parts divided along an axis. While our left and right eyes clearly have a high degree of external bilateral symmetry, it is less obvious to what degree they have internal bilateral symmetry. This is especially true for central retinal blood vessels (CRBVs), which supply blood to the retinas and can also serve as a strong biometric. In this paper, we study whether the CRBVs of the left and right retinas possess strong enough bilateral symmetry that one can reliably tell whether a pair of left and right CRBVs belongs to a single person. We evaluate and analyse the performance of both human and neural-network-based bilateral CRBV verification. By experimenting on a large publicly available data set, we confirm that CRBVs have bilateral symmetry to some extent.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127041668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of a score-to-likelihood ratio model for facial recognition using authentic criminalistic data","authors":"A. Mölder, Isabelle Enlund Åström, E. Leitet","doi":"10.1109/IWBF49977.2020.9107954","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107954","url":null,"abstract":"Automated face matching systems have emerged as a useful tool for identification purposes in criminal investigations. In a forensic context it is desirable to evaluate the findings from such comparisons as probabilities in terms of a likelihood ratio. When comparing two biometric samples, many facial recognition systems produce a score value as the output. The score describes the relative similarity between the two facial images. To obtain the likelihood ratio, it is necessary to construct a statistical model for score-to-likelihood ratio conversion. The model is highly dependent on the available training data and ideally it should reflect the relevant population as closely as possible. In order to construct a general model applicable on a national level, we use data from a national mugshot database as training data. In a full crossmatch drawing from 51563 records, we develop and evaluate five different models in a Bayesian statistical framework using a total of 9000 facial comparisons with equal distribution between same source and different source scores.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128317531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards adept hand-crafted features for ocular biometrics","authors":"Ritesh Vyas","doi":"10.1109/IWBF49977.2020.9107952","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107952","url":null,"abstract":"This article presents a hand-crafted feature descriptor for ocular recognition, which, as opposed to deep-learning-based approaches, is free from any kind of learning. The proposed approach is able to mitigate the limitations of iris recognition, such as poor iris segmentation and partial or covered irises. It leverages the unique texture present in the periocular region, which can provide complementary details alongside the iris modality or act as a potential standalone trait. The proposed descriptor is evaluated on three benchmark databases, namely VISOB, CrossEyed and MICHE. Two of these databases (VISOB and MICHE) provide eye images captured with smartphones, whereas the third provides standard eye images registered in visible as well as near-infrared wavelengths. Hence, the evaluation reported in this article is a comprehensive one. The experimental results show that the proposed approach is suitable in challenging evaluation frameworks.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123122055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Doctoral Consortium Proposal: Biometrics as forensic evidence: some reflections from the Italian criminal proceeding’s point of view","authors":"Ernestina Sacchetto","doi":"10.1109/IWBF49977.2020.9107948","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107948","url":null,"abstract":"Biometric science is not free of errors, especially given its statistical and probabilistic nature. The systematic use of biometric technologies in Italian criminal proceedings raises issues concerning the reliability of the results of their application and the compatibility of the discipline with constitutional principles and typical procedural guarantees. However, given the speed of recent technological development, criminal proceedings can hardly do without the contribution offered by biometric science: both science and the trial, albeit with different approaches, share the objective of reconstructing causal connections. Biometric data must be analyzed and evaluated in terms of scientific accuracy, and their value must be assessed case by case. The most crucial risks facing forensic science in its recent digital dimension call for legislative clarification aimed at standardising the discipline, including the interpretability of the technology, distinguishing one type of biometric data from another and implementing different applications, since the specificity of each data element requires different legal solutions. Otherwise, the danger is that procedural guarantees could be weakened (such as the adversarial principle, the principle of reasonable duration of trial and the right of defence), in addition to issues related to individual guarantees such as the right to privacy.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127278046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of Scene Geometry Priors for Data Association in Egocentric Views","authors":"Huiqin Chen, Emanuel Aldea, S. L. Hégarat-Mascle, V. Despiegel","doi":"10.1109/IWBF49977.2020.9107955","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107955","url":null,"abstract":"The joint use of dynamic, egocentric view cameras and of traditional overview surveillance cameras in high-risk contexts has become a promising avenue for advancing public safety and security applications, as it provides more accurate localization and finer analysis of individual interactions. However, strong scene scale changes, occlusions and appearance variations make egocentric data association more difficult than standard across-views data association. To address this issue, we propose to use two independent geometric priors and integrate them, together with classic appearance cues, into the objective function of the data association algorithm. Our results show that the proposed method achieves significant improvement in association accuracy. We highlight the attractive use of geometric priors in across-views data association and their potential for supporting pedestrian tracking in this context.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125413252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can GAN Generated Morphs Threaten Face Recognition Systems Equally as Landmark Based Morphs? - Vulnerability and Detection","authors":"S. Venkatesh, Haoyu Zhang, Raghavendra Ramachandra, K. Raja, N. Damer, C. Busch","doi":"10.1109/IWBF49977.2020.9107970","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107970","url":null,"abstract":"The primary objective of face morphing is to combine face images of different data subjects (e.g. a malicious actor and an accomplice) to generate a face image that can be equally verified for both contributing data subjects. In this paper, we propose a new framework for generating face morphs using a newer Generative Adversarial Network (GAN) - StyleGAN. In contrast to earlier works, we generate realistic morphs of both high quality and high resolution of 1024 × 1024 pixels. With the newly created morphing dataset of 2500 morphed face images, we pose a critical question in this work: (i) Can GAN-generated morphs threaten Face Recognition Systems (FRS) equally as landmark-based morphs? Seeking an answer, we benchmark the vulnerability of a Commercial-Off-The-Shelf FRS (COTS) and a deep-learning-based FRS (ArcFace). This work also benchmarks detection approaches for GAN-generated morphs against landmark-based morphs using established Morphing Attack Detection (MAD) schemes.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128006483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method for Curation of Web-Scraped Face Image Datasets","authors":"Kai Zhang, Vítor Albiero, K. Bowyer","doi":"10.1109/IWBF49977.2020.9107950","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107950","url":null,"abstract":"Web-scraped, in-the-wild datasets have become the norm in face recognition research. The numbers of subjects and images acquired in web-scraped datasets are usually very large, with the number of images on the scale of millions. A variety of issues occur when collecting a dataset in the wild, including images with the wrong identity label, duplicate images, duplicate subjects and variation in quality. With the number of images in the millions, a manual cleaning procedure is not feasible, but fully automated methods used to date result in a less-than-ideal level of dataset cleanliness. We propose a semi-automated method whose goal is a clean dataset for testing face recognition methods, with similar quality across men and women, to support comparison of accuracy across gender. Our approach removes near-duplicate images, merges duplicate subjects, corrects mislabeled images, and removes images outside a defined range of pose and quality. We conduct the curation on the Asian Face Dataset (AFD) and the VGGFace2 test dataset. The experiments show that a state-of-the-art method achieves a much higher accuracy on the datasets after they are curated. Finally, we release our cleaned versions of both datasets to the research community.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131242236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ComSupResNet: A Compact Super-Resolution Network for Low-Resolution Face Images","authors":"Aashish Rai, Vishal M. Chudasama, Kishor P. Upla, K. Raja, Raghavendra Ramachandra, C. Busch","doi":"10.1109/IWBF49977.2020.9107946","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107946","url":null,"abstract":"Typically, face-recognition-based applications require a certain degree of resolution for effective feature extraction and comparison. Many practical systems such as surveillance applications violate this requirement by capturing Low-Resolution (LR) face images due to a wider angle of imaging or a longer stand-off to the camera. This gap between requirement and practice has led numerous works to investigate approaches to super-resolve face images, ranging from classical dictionary-based methods to recent deep-learning-based approaches. In this work, we propose a compact and computationally efficient Convolutional Neural Network (CNN), which we refer to as ComSupResNet, to increase the spatial resolution of an LR face image and obtain a High-Resolution (HR) face image with an upscaling factor of up to ×8. Contrary to earlier works, the compact network comprises a progressive, residual-propagating asymmetrical architecture with three modules: low-frequency and high-frequency feature extraction modules and a reconstruction module. In addition to designing a new architecture, we also exercise care to reduce the number of parameters to approx. 1 M, compared to similar earlier work with more than 30 M parameters. As a second aspect, we examine the generalization of the learned network in a cross-database setting by training the network on the CelebA dataset while evaluating it on both CelebA and LFW. Through empirical evaluations, we demonstrate high-fidelity reconstruction in terms of structural similarity and Peak Signal-to-Noise Ratio (PSNR) despite the compactness of the model.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121562215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ActGAN: Flexible and Efficient One-shot Face Reenactment","authors":"Ivan Kosarevych, Marian Petruk, Markian Kostiv, Orest Kupyn, M. Maksymenko, Volodymyr Budzan","doi":"10.1109/IWBF49977.2020.9107944","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107944","url":null,"abstract":"This paper introduces ActGAN – a novel end-to-end generative adversarial network (GAN) for one-shot face reenactment. Given two images, the goal is to transfer the facial expression of the source actor onto a target person in a photo-realistic fashion. While existing methods require the target identity to be predefined, we address this problem by introducing a \"many-to-many\" approach, which allows arbitrary persons both as source and target without additional retraining. To this end, we employ the Feature Pyramid Network (FPN) as a core generator building block – the first application of FPN in face reenactment – producing finer results. We also introduce a solution to preserve a person’s identity between the synthesized image and the target person by adopting a state-of-the-art approach from the deep face recognition domain. The architecture readily supports reenactment in different scenarios – \"many-to-many\", \"one-to-one\" and \"one-to-another\" – in terms of expression accuracy, identity preservation, and overall image quality. We demonstrate that ActGAN achieves competitive performance against recent works concerning visual quality.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133830871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Which Ear Regions Contribute to Identification and to Gender Classification?","authors":"Di Meng, S. Mahmoodi, M. Nixon","doi":"10.1109/IWBF49977.2020.9107963","DOIUrl":"https://doi.org/10.1109/IWBF49977.2020.9107963","url":null,"abstract":"Previous studies in biometrics have shown how gender can be determined from images of ears for recognition, but without specificity. In this paper, we use model-based analysis and deep learning methods for gender classification from ear images. We use these methods to determine the differences between female and male ears. We confirm the identification performance and then the gender discrimination before analyzing which ear parts contribute most to performance. To this end, we compare the heatmaps of different genders with identification heatmaps. It appears from the heatmaps that ears encode females and males differently and we show how this can lead to successful gender discrimination and to increase insight into the process of identification of people by their ears. This could lead to gender identification in surveillance imagery, even when the face is concealed and provides a potential focus for future gender research.","PeriodicalId":174654,"journal":{"name":"2020 8th International Workshop on Biometrics and Forensics (IWBF)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123374764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}