Latest Articles in IET Biometrics

Profile to frontal face recognition in the wild using coupled conditional generative adversarial network
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-03-10 | DOI: 10.1049/bme2.12069 | Vol. 11(3), pp. 260-276
Fariborz Taherkhani, Veeru Talreja, Jeremy Dawson, Matthew C. Valenti, Nasser M. Nasrabadi
Abstract: In recent years, with the advent of deep learning, face recognition (FR) has achieved exceptional success. However, many deep FR models handle frontal faces much better than profile faces, chiefly because it is inherently difficult to learn pose-invariant deep representations useful for profile FR. The authors hypothesise that the profile face domain shares a latent connection with the frontal face domain in a common feature subspace, and exploit this connection by projecting profile and frontal faces into that subspace and performing verification or retrieval there. A coupled conditional generative adversarial network (cpGAN) is leveraged to find the hidden relationship between profile and frontal images in a common latent embedding subspace. The cpGAN framework consists of two conditional GAN-based sub-networks, one dedicated to the frontal domain and the other to the profile domain; each seeks a projection that maximises the pair-wise correlation between the two feature domains in the shared embedding subspace. Efficacy against the state of the art is demonstrated on the CFP, CMU Multi-PIE, IARPA Janus Benchmark A, and IARPA Janus Benchmark C datasets. The authors also implemented a coupled convolutional neural network (cpCNN) and an adversarial discriminative domain adaptation network (ADDA) for profile-to-frontal FR, compared their performance with the proposed cpGAN, and evaluated the cpGAN for reconstructing frontal faces from profile faces in the VGGFace2 dataset.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12069
Citations: 2
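The latent-subspace idea in the abstract above, two domain-specific networks projecting into one shared subspace where genuine profile/frontal pairs lie close together, can be sketched minimally. Everything below (fixed random linear maps standing in for the GAN generators, the dimensions, the distance threshold) is a hypothetical illustration, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear projections standing in for the two GAN generators:
# each maps a 128-d domain feature into a shared 32-d latent subspace.
W_profile = 0.1 * rng.standard_normal((32, 128))
W_frontal = 0.1 * rng.standard_normal((32, 128))

def project(W, x):
    """Project a feature into the common latent subspace and L2-normalise it."""
    z = W @ x
    return z / np.linalg.norm(z)

def coupling_loss(z_a, z_b):
    """Pairwise coupling term: small when a profile/frontal pair is genuine."""
    return float(np.sum((z_a - z_b) ** 2))

def verify(z_a, z_b, threshold=1.0):
    """Accept the pair as one identity if the latent distance is below threshold."""
    return coupling_loss(z_a, z_b) < threshold

# A feature compared against its own projection trivially verifies.
z = project(W_profile, rng.standard_normal(128))
```

In the paper the projections are learnt adversarially so that the coupling term is minimised for genuine pairs; here they are fixed maps, shown only to make the verification-in-the-latent-domain protocol concrete.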
Recognition of the finger vascular system using multi-wavelength imaging
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-03-05 | DOI: 10.1049/bme2.12068 | Vol. 11(3), pp. 249-259
Tomasz Moroń, Krzysztof Bernacki, Jerzy Fiołka, Jia Peng, Adam Popowicz
Abstract: Methods for identification and personal verification using the human finger vascular system (FVS) have recently seen intensive development, focussed mainly on increasingly sophisticated image processing, frequently employing machine learning. This article presents a new imaging concept in which the finger vasculature is illuminated at different wavelengths of light, generating multiple FVS images. The authors hypothesised that analysing these image sets, rather than individual images, could increase identification effectiveness. Analyses of data from over 100 volunteers, using five different deterministic feature-extraction methods, consistently demonstrated improved identification efficiency when data from an additional wavelength were included. The best results were obtained for combinations of diodes between 800 and 900 nm; FVS observations outside this range were of marginal utility. These findings can be used by designers of FVS-based biometric recognition devices, and confirm that progress in this field is not restricted to image processing algorithms: hardware innovations remain relevant.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12068
Citations: 1
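The reported gain from combining wavelengths can be illustrated with a toy score-level fusion. The paper actually analyses the multi-wavelength image sets themselves; the match scores and wavelength values below are invented purely for illustration.

```python
import numpy as np

# Invented per-wavelength match scores (higher = more similar) for one probe
# finger against one enrolled template; wavelengths in nm are illustrative.
scores = {760: 0.61, 808: 0.83, 850: 0.86, 880: 0.84, 940: 0.58}

def fuse(scores, wavelengths):
    """Score-level fusion over a chosen wavelength subset (simple mean rule)."""
    return float(np.mean([scores[w] for w in wavelengths]))

# The article found the 800-900 nm band most informative, so fusing within it
# should beat a single out-of-band wavelength on these toy numbers.
in_band = fuse(scores, [808, 850, 880])
out_of_band = fuse(scores, [760])
```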
Corresponding keypoint constrained sparse representation three-dimensional ear recognition via one sample per person
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-03-02 | DOI: 10.1049/bme2.12067 | Vol. 11(3), pp. 225-248
Qinping Zhu, Zhichun Mu, Li Yuan
Abstract: When only one sample per person (OSPP) is registered in the gallery, ear recognition methods struggle to sufficiently and effectively reduce the search range of the matching features, leading to low computational efficiency and mismatches. A 3D ear biometric system using OSPP is proposed to solve this problem. Ear images are categorised by shape, and a correspondence is established between keypoints in the ear images and regions (regional clusters) on directional proposals arranged to roughly face the image, yielding corresponding keypoints. Recognition then combines these corresponding keypoints with a multi-keypoint descriptor sparse representation classification method. Experiments on the University of Notre Dame Collection J2 dataset yielded a rank-1 recognition rate of 98.84%, with each gallery subject's share of one identification operation taking 0.047 ms.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12067
Citations: 2
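The sparse representation classification step above can be approximated by a per-subject reconstruction residual, which is what SRC reduces to when each gallery subject contributes a single atom (the OSPP setting). The dictionary contents, dimensions, and noise level below are synthetic, and least squares stands in for the sparse coding solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gallery: one 16-d keypoint descriptor per subject (OSPP), stacked
# as the columns of the dictionary D.
n_subjects, dim = 5, 16
D = rng.standard_normal((dim, n_subjects))

def classify(D, probe):
    """Assign the probe to the gallery column with the smallest reconstruction
    residual: a least-squares stand-in for the sparse representation step."""
    residuals = []
    for j in range(D.shape[1]):
        atom = D[:, j:j + 1]
        coef, *_ = np.linalg.lstsq(atom, probe, rcond=None)
        residuals.append(float(np.linalg.norm(probe - atom @ coef)))
    return int(np.argmin(residuals))

# A lightly perturbed copy of subject 3's descriptor should map back to 3.
probe = D[:, 3:4] + 0.01 * rng.standard_normal((dim, 1))
pred = classify(D, probe)
```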
Proposing a Fuzzy Soft-max-based classifier in a hybrid deep learning architecture for human activity recognition
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-02-06 | DOI: 10.1049/bme2.12066 | Vol. 11(2), pp. 171-186
Reza Shakerian, Meisam Yadollahzadeh-Tabari, Seyed Yaser Bozorgi Rad
Abstract: Human activity recognition (HAR) is the process of identifying and analysing activities performed by a person or persons. This paper proposes an efficient wearable-sensor-based HAR system using deep learning. The proposed system stacks a convolutional neural network (CNN) and a long short-term memory (LSTM) network: the CNN extracts high-level features from the sensor data, and the LSTM learns the time-series behaviour of the abstracted data. For the dense layer, the paper proposes a Fuzzy Soft-max classifier that maps the output of the LSTM blocks to activity classes. The motivation is that sensor data for resembling activities, such as walking versus running or opening versus closing a door, are often very similar, so adding fuzzy inference to the standard Soft-max classifier should improve its ability to distinguish them. A post-processing module that considers activity classification over a longer period is also introduced. With the Fuzzy Soft-max classifier and post-processing, the authors reached accuracies of 97.03 and 85.1 on the PAMAP2 and Opportunity datasets, respectively.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12066
Citations: 6
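One plausible reading of a "Fuzzy Soft-max" over similar activity classes is a softened, floor-bounded membership distribution. The sketch below is that reading only, with invented temperature and floor parameters; it is not the authors' formulation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Standard Soft-max with a temperature; t > 1 gives softer memberships."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuzzy_softmax(logits, temperature=2.0, floor=0.05):
    """Hypothetical fuzzified variant: soften the distribution and keep a small
    non-zero membership for every class, in the spirit of fuzzy inference."""
    p = np.maximum(softmax(logits, temperature), floor)
    return p / p.sum()

# Logits for three activities, e.g. walking, running, lying down: the first
# two are near-ties, exactly the confusable case the paper targets.
p = fuzzy_softmax([2.0, 1.8, -1.0])
```

A post-processing module like the paper's could then, for example, majority-vote these memberships over a sliding window of consecutive predictions.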
Reliable Detection of Doppelgängers based on Deep Face Representations
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-01-21 | DOI: 10.1049/bme2.12072 | pp. 215-224
C. Rathgeb, Daniel Fischer, P. Drozdowski, C. Busch
Abstract: Doppelgängers (or lookalikes) usually yield an increased probability of false matches in a facial recognition system, as opposed to random face image pairs selected for non-mated comparison trials. In this work, the impact of doppelgängers is assessed on the HDA Doppelgänger and Disguised Faces in The Wild databases using a state-of-the-art face recognition system. Doppelgänger image pairs are found to yield very high similarity scores, resulting in a significant increase in false match rates. Further, a doppelgänger detection method is proposed that distinguishes doppelgängers from mated comparison trials by analysing differences in the deep representations obtained from face image pairs. The detection system employs a machine-learning-based classifier trained on doppelgänger image pairs generated with face morphing techniques. Experimental evaluations on the HDA Doppelgänger and Look-Alike Face databases reveal a detection equal error rate of approximately 2.7% for separating mated authentication attempts from doppelgängers.
Citations: 1
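The detection method above feeds differences of deep face representations to a trained classifier. A minimal sketch of that input pipeline follows; the simple mean-difference threshold is a toy stand-in for the paper's machine-learning classifier, and the embeddings are invented.

```python
import numpy as np

def pair_feature(emb_a, emb_b):
    """Element-wise difference of two deep face representations; the paper
    trains a classifier on such differences (this sketch is not their model)."""
    return np.abs(np.asarray(emb_a, dtype=float) - np.asarray(emb_b, dtype=float))

def is_mated(emb_a, emb_b, threshold=0.5):
    """Toy stand-in for the trained classifier: a small mean absolute
    difference is taken as a mated (same-person) attempt."""
    return float(pair_feature(emb_a, emb_b).mean()) < threshold

# Identical embeddings are mated; a strongly shifted one is not.
e = np.array([0.2, -0.1, 0.4, 0.0])
```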
Deep learning model based on cascaded autoencoders and one-class learning for detection and localization of anomalies from surveillance videos
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-01-18 | DOI: 10.1049/bme2.12064 | Vol. 11(4), pp. 289-303
Karishma Pawar, Vahida Attar
Abstract: Given the need for stronger security measures for monitoring and safeguarding activities, video anomaly detection is a significant research topic in computer vision. Assigning personnel to continuously check surveillance videos for suspicious activities such as violence, robbery, or wrong U-turns is laborious and error-prone, motivating automated video surveillance systems. This paper addresses the detection and localization of anomalies in surveillance videos using pipelined deep autoencoders and one-class learning. Specifically, a convolutional autoencoder and a sequence-to-sequence long short-term memory autoencoder are used in a pipeline for spatial and temporal learning of the videos, respectively. Following the one-class classification principle, the model is trained on normal data and tested on anomalous data. The approach achieves an equal error rate and an anomaly detection and localization time comparable to standard benchmarked approaches, qualifying it for near-real-time anomaly detection and localization.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12064
Citations: 5
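The one-class decision rule such autoencoder pipelines typically use can be sketched as reconstruction-error thresholding: a model trained only on normal footage reconstructs normal frames well and anomalous ones poorly. The frames, reconstructions, and threshold below are invented, and the autoencoders themselves are not modelled.

```python
import numpy as np

def anomaly_score(frame, reconstruction):
    """Per-frame irregularity score: mean squared reconstruction error. In the
    paper this error comes from the cascaded conv + LSTM autoencoders trained
    on normal footage only; here the reconstruction is supplied directly."""
    return float(np.mean((frame - reconstruction) ** 2))

def is_anomalous(score, threshold=0.1):
    """One-class decision: error above the threshold flags an anomaly,
    because the model never learnt to reconstruct such content."""
    return score > threshold

normal_frame = np.ones((4, 4))
good_recon = normal_frame + 0.01      # near-perfect reconstruction
bad_recon = normal_frame + 0.9        # what an unseen (anomalous) frame yields
```

Localization follows the same idea at a finer granularity: thresholding the error map per region rather than per frame.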
Signal-level fusion for indexing and retrieval of facial biometric data
IET Biometrics | IF 2.0 | CAS Q4, Computer Science | Published 2022-01-13 | DOI: 10.1049/bme2.12063 | Vol. 11(2), pp. 141-156
Pawel Drozdowski, Fabian Stockhardt, Christian Rathgeb, Christoph Busch
Abstract: The growing scope, scale, and number of biometric deployments around the world emphasise the need for technologies that enable efficient and reliable biometric identification queries. This work presents a method of indexing biometric databases that relies on signal-level fusion of facial images (morphing) to create a multi-stage data structure and retrieval protocol. By successively pre-filtering the list of candidate identities, the proposed method reduces the number of biometric template comparisons needed to complete an identification transaction. The method is extensively evaluated on publicly available databases using open-source and commercial off-the-shelf recognition systems. Results show that the computational workload can be reduced to around 30% while fully maintaining the biometric performance of a baseline exhaustive-search retrieval, in both closed-set and open-set identification scenarios.
Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12063
Citations: 3
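The multi-stage pre-filtering idea above can be sketched as a two-stage search: compare the probe against a few fused group representatives first, then only against the winning group's templates. The paper fuses at signal level by morphing the face images themselves; averaging embedding vectors, as below, is only a simplified analogue, and the gallery and grouping are invented.

```python
import numpy as np

# Toy gallery of four enrolled embeddings forming two visually similar clusters.
gallery = np.array([
    [0.0, 0.0], [0.2, 0.1],   # group 0
    [5.0, 5.0], [5.2, 4.9],   # group 1
])
groups = [[0, 1], [2, 3]]

# Stage 1: one fused representative per group (embedding average standing in
# for the morphed face image).
representatives = np.stack([gallery[g].mean(axis=0) for g in groups])

def retrieve(probe):
    """Two-stage retrieval: pick the closest fused representative, then compare
    only against that group's templates, halving the comparisons here."""
    best = int(np.argmin(np.linalg.norm(representatives - probe, axis=1)))
    members = groups[best]
    dists = np.linalg.norm(gallery[members] - probe, axis=1)
    return members[int(np.argmin(dists))]

hit = retrieve(np.array([5.1, 5.0]))   # nearest enrolled template: index 2
```

With more stages and larger groups, the fraction of templates touched per query shrinks further, which is where the reported ~30% workload figure comes from.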