Discriminative Multiview Learning for Robust Palmprint Feature Representation and Recognition

Authors: Shuyi Li; Jianhang Zhou; Bob Zhang; Lifang Wu; Meng Jian
Journal: IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 6, no. 3, pp. 304-313
DOI: 10.1109/TBIOM.2024.3401574
Publication date: 2024-03-15
URL: https://ieeexplore.ieee.org/document/10531042/
Citation count: 0
Abstract
Binary-based feature representation methods have received increasing attention in palmprint recognition due to their high efficiency and strong robustness to illumination variation. However, most of them are hand-designed descriptors that generally require considerable prior knowledge to design. On the other hand, conventional single-view palmprint recognition approaches have difficulty expressing the features of each sample strongly, especially for low-quality palmprint images. To solve these problems, this paper proposes a novel discriminative multiview learning method, named Row-sparsity Binary Feature Learning-based Multiview (RsBFL_Mv) representation, for palmprint recognition. Specifically, given the multiview training data, RsBFL_Mv jointly learns multiple projection matrices that transform the informative multiview features into discriminative binary codes. Afterwards, the learned binary codes of each view are converted into real-valued maps. Following this, we calculate the histograms of the multiview feature maps and concatenate them for matching. For RsBFL_Mv, we enforce three criteria: 1) the quantization error between the projected real-valued features and the binary features of each view is minimized and, at the same time, the projection error is minimized; 2) the salient label information of each view is utilized to minimize the distance between within-class samples and simultaneously maximize the distance between between-class samples; 3) the $l_{2,1}$ norm is imposed so that the learned projection matrices extract more representative features. Extensive experimental results on two publicly accessible palmprint datasets demonstrate the effectiveness of the proposed method in both recognition accuracy and computational efficiency. Furthermore, additional experiments on two commonly used finger vein datasets verify the strong generalization capability of the proposed method.
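The feature-extraction pipeline the abstract describes (project, binarize, map to real values, build histograms, concatenate across views) can be sketched as below. This is a hedged illustration, not the paper's implementation: the projection matrices are random stand-ins for the ones RsBFL_Mv learns jointly, the bit-weighting used to form the real-valued map is an assumption, and each feature row is treated as one local patch of a single palmprint image. The `l21_norm` helper shows the row-sparsity regularizer named in the abstract, $\|W\|_{2,1} = \sum_i \sqrt{\sum_j W_{ij}^2}$.

```python
import numpy as np

def binary_codes(features, W):
    """Project real-valued features with W and binarize by sign (-> {0, 1})."""
    return (features @ W.T > 0).astype(np.uint8)

def code_to_real_map(codes):
    """Convert each bit vector to a real value by weighting bits with
    powers of two (an assumed encoding of the 'real-value map' step)."""
    weights = 2 ** np.arange(codes.shape[1])
    return codes @ weights

def view_histogram(values, n_bins):
    """Normalized histogram of the mapped values for one view."""
    hist, _ = np.histogram(values, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def l21_norm(W):
    """Row-sparsity l_{2,1} norm: sum of the l2 norms of the rows of W."""
    return np.sqrt((W ** 2).sum(axis=1)).sum()

rng = np.random.default_rng(0)
n_views, dim, code_len = 3, 64, 8
# Synthetic stand-ins: 100 local patches per view, random "learned" matrices.
views = [rng.normal(size=(100, dim)) for _ in range(n_views)]
Ws = [rng.normal(size=(code_len, dim)) for _ in range(n_views)]

# Per view: binary codes -> real-valued map -> histogram; then concatenate
# the per-view histograms into the final multiview descriptor for matching.
descriptors = [
    view_histogram(code_to_real_map(binary_codes(X, W)), 2 ** code_len)
    for X, W in zip(views, Ws)
]
descriptor = np.concatenate(descriptors)
```

Matching would then compare two such concatenated descriptors with a histogram distance (e.g., chi-square or Euclidean); the choice of distance here is an assumption, as the abstract only states that the concatenated histograms are used for matching.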