Single-sample face and ear recognition using virtual sample generation with 2D local patches

Vivek Tomar, Nitin Kumar
DOI: 10.1007/s11227-024-06463-5
Journal: The Journal of Supercomputing
Publication date: 2024-08-30
Publication type: Journal Article
Citations: 0

Abstract

Single-sample face and ear recognition (SSFER) is a challenging sub-problem in biometric recognition that refers to the difficulty in feature extraction and classification when only a single face or ear training image is available. SSFER becomes much more challenging when images contain a variety of lighting, positions, occlusions, expressions, etc. Virtual sample generation methods in SSFER have gained popularity among researchers due to their simplicity in the augmentation of training sets and improved feature extraction. In this article, we propose a novel and simple method for the generation of virtual samples for training the classifiers to be used in SSFER. The proposed method is based on 2D local patches, and six training samples are generated for a single face or ear image. Further, training is performed using one of the variations along with its generated virtual samples, while during testing, all the variations are considered except the one used during training. Features are extracted using principal component analysis, and classification is performed using the nearest-neighbour classifier. Extensive experiments were performed for the image quality of the virtual samples, classification accuracy, and testing time on the publicly available ORL, Yale, and AR (illumination) face databases, and the AMI and IITD ear databases. The results are also compared with other state-of-the-art methods, with classification accuracy and universal image quality being the major outcomes. The proposed method improves classification accuracy by 14.50%, 1.11%, 0.09%, 21.60%, and 10.00% on the AR (illumination), Yale, ORL, IITD, and AMI databases, respectively. The proposed method also improves universal image quality by 15%, 20%, 14%, 30%, and 15% on the AR (illumination), Yale, ORL, IITD, and AMI databases, respectively. Experimental results demonstrate the effectiveness of the proposed method in generating virtual samples for SSFER.
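The pipeline described in the abstract (six patch-based virtual samples from one image, PCA features, 1-NN classification) can be sketched as follows. The abstract does not specify which patch operations produce the six samples, so the operations below (mirrors, patch-mean smoothing, patch-wise normalisation, brightness shifts) are hypothetical stand-ins; `generate_virtual_samples`, `pca_fit`, and `nearest_neighbour` are illustrative names, not the authors' code.

```python
import numpy as np

def generate_virtual_samples(img, patch=8):
    """Produce six virtual samples from a single 2D image.

    The abstract only states that six samples are derived from 2D local
    patches; these particular operations are assumptions for illustration.
    """
    img = img.astype(float)
    h, w = img.shape
    smooth = img.copy()   # each patch replaced by its mean intensity
    norm = img.copy()     # each patch contrast-stretched to [0, 1]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            blk = img[i:i + patch, j:j + patch]
            smooth[i:i + patch, j:j + patch] = blk.mean()
            rng = blk.max() - blk.min()
            norm[i:i + patch, j:j + patch] = (blk - blk.min()) / (rng + 1e-8)
    return [np.fliplr(img), np.flipud(img), smooth, 255 * norm,
            np.clip(img + 20, 0, 255), np.clip(img - 20, 0, 255)]

def pca_fit(X, k):
    """PCA via SVD on mean-centred rows of X (one flattened image per row).

    Returns the mean vector and a (d, k) projection basis.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def nearest_neighbour(train_feats, labels, query):
    """1-NN classification in the PCA subspace (Euclidean distance)."""
    d = np.linalg.norm(train_feats - query, axis=1)
    return labels[int(np.argmin(d))]
```

In use, the training set for each subject would be the single available image plus its six virtual samples, all flattened, projected through `pca_fit`, and matched against test images with `nearest_neighbour`.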

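The "universal image quality" figures reported above are presumably the universal image quality index (UQI) of Wang and Bovik (2002); a minimal global version of that metric can be sketched as below. The original formulation applies the index over sliding windows and averages the results, so this single-window variant is a simplification.

```python
import numpy as np

def uqi(x, y):
    """Universal image quality index Q = 4*cov*mx*my / ((vx+vy)(mx^2+my^2)).

    Global (single-window) variant; returns 1.0 for identical images.
    Undefined (division by zero) when both images are constant with zero mean.
    """
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Comparing each virtual sample against its source image with such an index is one plausible way the per-database quality improvements could have been measured.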
