{"title":"Improved Multiplication-Free Biometric Recognition Under Encryption","authors":"Amina Bassit;Florian F. W. Hahn;Raymond N. J. Veldhuis;Andreas Peter","doi":"10.1109/TBIOM.2023.3340306","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3340306","url":null,"abstract":"Modern biometric recognition systems extract distinctive feature vectors of biometric samples using deep neural networks to measure the amount of (dis-)similarity between two biometric samples. Studies have shown that personal information (e.g., health condition, ethnicity, etc.) can be inferred, and biometric samples can be reconstructed from those feature vectors, making their protection an urgent necessity. State-of-the-art biometrics protection solutions are based on homomorphic encryption (HE) to perform recognition over encrypted feature vectors, hiding the features and their processing while releasing the outcome only. However, this comes at the cost of those solutions’ efficiency due to the inefficiency of HE-based solutions with a large number of multiplications; for (dis-)similarity measures, this number is proportional to the vector’s dimension. In this paper, we tackle the HE performance bottleneck by freeing the two common (dis-)similarity measures, the cosine similarity and the squared Euclidean distance, from multiplications. Assuming normalized feature vectors, our approach pre-computes and organizes those (dis-)similarity measures into lookup tables. This transforms their computation into simple table lookups and summations only. We integrate the table lookup with HE and introduce pseudo-random permutations to enable cheap plaintext slot selection, which significantly saves the recognition runtime and brings a positive impact on the recognition performance. We then assess their runtime efficiency under encryption and record runtimes between 16.74ms and 49.84ms for both the cleartext and encrypted decision modes over the three security levels, demonstrating their enhanced speed for a compact encrypted reference template reduced to one ciphertext.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 3","pages":"314-325"},"PeriodicalIF":0.0,"publicationDate":"2023-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10347446","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141725584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Considerations on the Evaluation of Biometric Quality Assessment Algorithms","authors":"Torsten Schlett;Christian Rathgeb;Juan Tapia;Christoph Busch","doi":"10.1109/TBIOM.2023.3336513","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3336513","url":null,"abstract":"Quality assessment algorithms can be used to estimate the utility of a biometric sample for the purpose of biometric recognition. “Error versus Discard Characteristic” (EDC) plots, and “partial Area Under Curve” (pAUC) values of curves therein, are generally used by researchers to evaluate the predictive performance of such quality assessment algorithms. An EDC curve depends on an error type such as the “False Non Match Rate” (FNMR), a quality assessment algorithm, a biometric recognition system, a set of comparisons each corresponding to a biometric sample pair, and a comparison score threshold corresponding to a starting error. To compute an EDC curve, comparisons are progressively discarded based on the associated samples’ lowest quality scores, and the error is computed for the remaining comparisons. Additionally, a discard fraction limit or range must be selected to compute pAUC values, which can then be used to quantitatively rank quality assessment algorithms. This paper discusses and analyses various details for this kind of quality assessment algorithm evaluation, including general EDC properties, interpretability improvements for pAUC values based on a hard lower error limit and a soft upper error limit, the use of relative instead of discrete rankings, stepwise vs. linear curve interpolation, and normalisation of quality scores to a [0, 100] integer range. We also analyse the stability of quantitative quality assessment algorithm rankings based on pAUC values across varying pAUC discard fraction limits and starting errors, concluding that higher pAUC discard fraction limits should be preferred. The analyses are conducted both with synthetic data and with real face image and fingerprint quality assessment data, with a focus on general modality-independent conclusions for EDC evaluations. Various EDC alternatives are discussed as well. Open source evaluation software is provided at \u0000<uri>https://github.com/dasec/quality-assessment-evaluation</uri>\u0000. Will be made available upon acceptance.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"54-67"},"PeriodicalIF":0.0,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10330743","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140063642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2D-SNet: A Lightweight Network for Person Re-Identification on the Small Data Regime","authors":"Wei Li;Shitong Shao;Ziming Qiu;Zhihao Zhu;Aiguo Song","doi":"10.1109/TBIOM.2023.3332285","DOIUrl":"10.1109/TBIOM.2023.3332285","url":null,"abstract":"Currently, researchers incline to employ large-scale datasets as benchmarks for pre-training and fine-tuning models on small-scale datasets to achieve superior performance. However, many researchers cannot afford the enormous computational overhead that pre-training entails, and fine-tuning is easy to compromise the generalization ability of models for the target dataset. Therefore, model learning on the small challenging data regime should be given renewed attention, which will benefit many tasks such as person re-identification. To this end, we propose a novel model named “Two-Dimensional Serpentine Network (2D-SNet)”, which is constructed by multiple lightweight and effective “Two-Dimensional Serpentine Blocks (2D-SBlocks)”. The generalization ability of 2D-SNet stems from three points: (a) 2D-SBlock utilizes multi-scale convolution kernels to extract the multi-scale information from images on the small data regime; (b) 2D-SBlock has a serpentine calculation order, which significantly reduces the number of skip connections and can thereby save many computational and storage resources; (c) 2D-SBlock improves the discrimination ability of 2D-SNet via BN-Depthwise Conv or MSA. As experimentally demonstrated, our proposed 2D-SNet has superiority outstrips closely-related advanced approaches for person re-identification on datasets Market-1501 and CUHK03.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"68-78"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135612246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethnicity and Biometric Uniqueness: Iris Pattern Individuality in a West African Database","authors":"John Daugman;Cathryn Downing;Oluwatobi Noah Akande;Oluwakemi Christiana Abikoye","doi":"10.1109/TBIOM.2023.3327287","DOIUrl":"10.1109/TBIOM.2023.3327287","url":null,"abstract":"We conducted more than 1.3 million comparisons of iris patterns encoded from images collected at two Nigerian universities, which constitute the newly available African Human Iris (AFHIRIS) database. The purpose was to discover whether ethnic differences in iris structure and appearance such as the textural feature size, as contrasted with an all-Chinese image database or an American database in which only 1.53% were of African-American heritage, made a material difference for iris discrimination. We measured a reduction in entropy for the AFHIRIS database due to the coarser iris features created by the thick anterior layer of melanocytes, and we found stochastic parameters that accurately model the relevant empirical distributions. Quantile-Quantile analysis revealed that a very small change in operational decision thresholds for the African database would compensate for the reduced entropy and generate the same performance in terms of resistance to False Matches. We conclude that despite demographic difference, individuality can be robustly discerned by comparison of iris patterns in this West African population.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"79-86"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134981086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cancelable Face Recognition Using Deep Steganography","authors":"Koichi Ito;Takashi Kozu;Hiroya Kawai;Goki Hanawa;Takafumi Aoki","doi":"10.1109/TBIOM.2023.3327694","DOIUrl":"10.1109/TBIOM.2023.3327694","url":null,"abstract":"In biometrics, the secure transfer and storage of biometric samples are important for protecting the privacy and security of the data subject. One of the methods for authentication while protecting biometric samples is cancelable biometrics, which performs transformation of features and uses the transformed features for authentication. Among the methods of cancelable biometrics, steganography-based approaches have been proposed, in which secret information is embedded in another to hide its existence. In this paper, we propose cancelable biometrics based on deep steganography for face recognition. We embed a face image or its face features into a cover image to generate a stego image with the same appearance as the cover image. By using a dedicated face feature extractor, we can perform face recognition without restoring the embedded face image or face features from the stego image. We demonstrate the effectiveness of the proposed method compared to conventional steganography-based methods through performance and security evaluation experiments using public face image datasets. In addition, we present one of the potential applications of the proposed method to improve the security of face recognition by using a QR code with a one-time password for the cover image.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"87-102"},"PeriodicalIF":0.0,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10296007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134980588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3-D Face Morphing Attacks: Generation, Vulnerability and Detection","authors":"Jag Mohan Singh;Raghavendra Ramachandra","doi":"10.1109/TBIOM.2023.3324684","DOIUrl":"10.1109/TBIOM.2023.3324684","url":null,"abstract":"Face Recognition systems (FRS) have been found to be vulnerable to morphing attacks, where the morphed face image is generated by blending the face images from contributory data subjects. This work presents a novel direction for generating face-morphing attacks in 3D. To this extent, we introduced a novel approach based on blending 3D face point clouds corresponding to contributory data subjects. The proposed method generates 3D face morphing by projecting the input 3D face point clouds onto depth maps and 2D color images, followed by image blending and wrapping operations performed independently on the color images and depth maps. We then back-projected the 2D morphing color map and the depth map to the point cloud using the canonical (fixed) view. Given that the generated 3D face morphing models will result in holes owing to a single canonical view, we have proposed a new algorithm for hole filling that will result in a high-quality 3D face morphing model. Extensive experiments were conducted on the newly generated 3D face dataset comprising 675 3D scans corresponding to 41 unique data subjects and a publicly available database (Facescape) with 100 data subjects. Experiments were performed to benchmark the vulnerability of the proposed 3D morph-generation scheme against automatic 2D, 3D FRS, and human observer analysis. We also presented a quantitative assessment of the quality of the generated 3D face-morphing models using eight different quality metrics. Finally, we propose three different 3D face Morphing Attack Detection (3D-MAD) algorithms to benchmark the performance of 3D face morphing attack detection techniques.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"103-117"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10286232","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136374230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2023 Index IEEE Transactions on Biometrics, Behavior, and Identity Science Vol. 5","authors":"","doi":"10.1109/TBIOM.2023.3323413","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3323413","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"606-615"},"PeriodicalIF":0.0,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10278523.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Publication Information","authors":"","doi":"10.1109/TBIOM.2023.3311478","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3311478","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10273760.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49989201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Biometrics, Behavior, and Identity Science Information for Authors","authors":"","doi":"10.1109/TBIOM.2023.3311406","DOIUrl":"https://doi.org/10.1109/TBIOM.2023.3311406","url":null,"abstract":"","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"5 4","pages":"C3-C3"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8423754/10273758/10273704.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49963983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explaining Deep Face Algorithms Through Visualization: A Survey","authors":"Thrupthi Ann John;Vineeth N. Balasubramanian;C. V. Jawahar","doi":"10.1109/TBIOM.2023.3319837","DOIUrl":"10.1109/TBIOM.2023.3319837","url":null,"abstract":"Although current deep models for face tasks surpass human performance on some benchmarks, we do not understand how they work. Thus, we cannot predict how it will react to novel inputs, resulting in catastrophic failures and unwanted biases in the algorithms. Explainable AI helps bridge the gap, but currently, there are very few visualization algorithms designed for faces. This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain. We explore the nuances and caveats of adapting general-purpose visualization algorithms to the face domain, illustrated by computing visualizations on popular face models. We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks. We also determine the design considerations for practical face visualizations accessible to AI practitioners by conducting a user study on the utility of various explainability algorithms.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 1","pages":"15-29"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135794552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}