Latest Publications in IET Biometrics

Towards pen-holding hand pose recognition: A new benchmark and a coarse-to-fine PHHP recognition network
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-05-17 · DOI: 10.1049/bme2.12079
Pingping Wu, Lunke Fei, Shuyi Li, Shuping Zhao, Xiaozhao Fang, Shaohua Teng
Abstract: Hand pose recognition has been one of the most fundamental tasks in computer vision and pattern recognition, and substantial effort has been devoted to this field. However, owing to the lack of a public large-scale benchmark dataset, there is little literature specifically studying pen-holding hand pose (PHHP) recognition. As an attempt to fill this gap, this paper establishes a PHHP image dataset consisting of 18,000 PHHP samples. To the best of the authors' knowledge, this is the largest vision-based PHHP dataset ever collected. Furthermore, the authors design a coarse-to-fine PHHP recognition network consisting of a coarse multi-feature learning network and a fine pen-grasping-specific feature learning network: the coarse network extensively exploits multiple discriminative features by sharing hand-shape-based spatial attention information, while the fine network further learns pen-grasping-specific features by embedding a pair of convolutional block attention modules into three convolution blocks. Experimental results show that the proposed method achieves very competitive PHHP recognition performance compared with the baseline recognition models.
IET Biometrics, Volume 11, Issue 6, Pages 581-587. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12079
Citations: 1
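The fine learning network embeds convolutional block attention modules (CBAM) into its convolution blocks. As an illustrative sketch only (not the authors' code), CBAM-style channel attention can be written in a few lines of NumPy, assuming a feature map of shape (C, H, W) and hypothetical shared MLP weights `w1`, `w2`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention: a shared two-layer MLP is applied to the
    average- and max-pooled channel descriptors, and the summed result is
    squashed into per-channel weights that rescale the feature map.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    avg = feat.mean(axis=(1, 2))          # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))            # (C,) max-pooled descriptor
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return feat * att[:, None, None]      # rescale each channel

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The spatial-attention half of CBAM follows the same pattern with pooling over channels instead of positions.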
Recognition of human Iris for biometric identification using Daugman's method
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-05-14 · DOI: 10.1049/bme2.12074
Reend Tawfik Mohammed, Harleen Kaur, Bhavya Alankar, Ritu Chauhan
Abstract: Iris identification is a well-known technology used in striking biometric identification procedures that recognise human beings from physical characteristics. The texture of the iris is unique, and its anatomy varies from individual to individual. Because these physical features are unique and never change, the field of iris recognition has developed significantly. Iris recognition tends to be a reliable domain of technology as it inherits the random variation of the data. In the proposed approach, the authors design and implement a framework of several subsystems, where each phase feeds the next stage of the iris recognition system; these stages are segmentation, normalisation, and feature encoding. The study is implemented in MATLAB, with results produced using the rapid application development (RAD) approach, chosen for its computing power in generating expeditious results with complex coding, the image processing toolbox, and a high-level programming methodology. Further, the performance of the technology is tested on several groups of eye images: the MMU Iris database, CASIA V1, CASIA V2, the MICHE I and MICHE II iris databases, and images captured with an iPhone camera and an Android phone. The emphasis of the current approach is to apply the proposed algorithm to achieve high performance under less ideal conditions.
IET Biometrics, Volume 11, Issue 4, Pages 304-313. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12074
Citations: 2
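Daugman's method localises the iris by maximising the blurred radial derivative of a circular boundary integral of the image. A minimal NumPy sketch of that core idea on a synthetic pupil, with the centre fixed and Gaussian smoothing omitted for brevity (the full operator also searches over the centre coordinates):

```python
import numpy as np

def circle_mean(img, cx, cy, r, n=360):
    """Mean intensity along a circle of radius r centred at (cx, cy)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_radius(img, cx, cy, radii):
    """Pick the radius with the largest jump in the circular boundary
    integral -- the 1-D core of Daugman's integro-differential operator."""
    means = np.array([circle_mean(img, cx, cy, r) for r in radii])
    jumps = np.abs(np.diff(means))        # discrete radial derivative
    return radii[int(np.argmax(jumps)) + 1]

# synthetic eye: dark "pupil" disc of radius 20 on a bright background
img = np.full((200, 200), 200.0)
yy, xx = np.mgrid[0:200, 0:200]
img[(xx - 100) ** 2 + (yy - 100) ** 2 <= 20 ** 2] = 30.0
r = daugman_radius(img, 100, 100, np.arange(5, 60))
print(r)  # the sharpest intensity jump sits at the pupil boundary (r ≈ 20)
```

In a full implementation the jump profile is smoothed with a Gaussian before taking the maximum, and the same operator is run twice to find the pupil and limbus boundaries.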
Breast mass classification based on supervised contrastive learning and multi-view consistency penalty on mammography
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-05-12 · DOI: 10.1049/bme2.12076
Lilei Sun, Jie Wen, Junqian Wang, Zheng Zhang, Yong Zhao, Guiying Zhang, Yong Xu
Abstract: Breast cancer accounts for the largest number of patients among all cancers in the world. Intervention treatment for early breast cancer can dramatically extend a woman's 5-year survival rate. However, the lack of publicly available breast mammography databases in the field of computer-aided diagnosis, together with insufficient feature extraction from mammograms, limits diagnostic performance for breast cancer. In this paper, a novel classification algorithm based on a convolutional neural network (CNN) is proposed to improve diagnostic performance for breast cancer on mammography. A multi-view network is designed to extract the complementary information between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic views of a breast mass. When the features extracted from the CC and MLO views of the same breast mass yield different predictions, the proposed algorithm forces the network to extract consistent features from the two views via a cross-entropy function with an added consistency penalty term. To exploit discriminative features from the limited mammographic images, the authors train an encoder in the classification model with supervised contrastive learning (SCL) to learn invariant representations of the mammographic breast mass, weakening the side effects of colour jitter and illumination on image quality. Experimental results for all the classification algorithms mentioned in this paper on the Digital Database for Screening Mammography (DDSM) illustrate that the proposed algorithm greatly improves the classification performance and diagnostic speed for mammographic breast masses, which is of great significance for breast cancer diagnosis.
IET Biometrics, Volume 11, Issue 6, Pages 588-600. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12076
Citations: 0
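The consistency penalty pushes the CC and MLO branches toward the same prediction. A hedged sketch of such a two-view loss, where the squared difference between the two softmax outputs is an illustrative choice of penalty rather than the paper's exact term:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def two_view_loss(logits_cc, logits_mlo, label, lam=0.5):
    """Cross-entropy on each view plus a consistency penalty that pulls the
    CC and MLO predicted distributions toward each other."""
    p_cc, p_mlo = softmax(logits_cc), softmax(logits_mlo)
    ce = -np.log(p_cc[label]) - np.log(p_mlo[label])   # per-view cross-entropy
    penalty = np.sum((p_cc - p_mlo) ** 2)              # view disagreement
    return ce + lam * penalty

agree = two_view_loss(np.array([3.0, 0.0]), np.array([3.0, 0.0]), label=0)
clash = two_view_loss(np.array([3.0, 0.0]), np.array([0.0, 3.0]), label=0)
print(agree < clash)  # True: disagreeing views are penalised harder
```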
Masked face recognition: Human versus machine
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-05-07 · DOI: 10.1049/bme2.12077
Naser Damer, Fadi Boutros, Marius Süßmilch, Meiling Fang, Florian Kirchbuchner, Arjan Kuijper
Abstract: The recent COVID-19 pandemic has increased the focus on hygienic and contactless identity verification methods. However, the pandemic led to the wide use of face masks, essential to keeping the pandemic under control. The effect of wearing a mask on face recognition (FR) in a collaborative environment is a currently sensitive yet understudied issue. Recent reports have tackled this by evaluating the masked-probe effect on the performance of automatic FR solutions. However, such solutions can fail in certain processes, leaving the verification task to be performed by a human expert. This work provides a joint evaluation and in-depth analysis of the face verification performance of human experts in comparison to state-of-the-art automatic FR solutions, involving an extensive evaluation by human experts and four automatic recognition solutions. The study concludes with a set of take-home messages on different aspects of the correlation between the verification behaviour of humans and machines.
IET Biometrics, Volume 11, Issue 5, Pages 512-528. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12077
Citations: 9
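Automatic FR solutions of this kind typically compare deep embeddings of the two face images against a similarity threshold. A minimal sketch of that verification decision, with toy 128-dimensional embeddings standing in for real network outputs:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two deep face embeddings (higher = more alike)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(a, b, threshold=0.5):
    """Accept the pair as mated if similarity clears the threshold."""
    return cosine_similarity(a, b) >= threshold

rng = np.random.default_rng(1)
probe = rng.standard_normal(128)
mated = probe + 0.1 * rng.standard_normal(128)   # same identity, small noise
nonmated = rng.standard_normal(128)              # different identity
print(verify(probe, mated), verify(probe, nonmated))  # mated pair accepted, non-mated rejected
```

Occlusion by a mask degrades the embedding of the lower face, which is why masked probes shift genuine scores toward the threshold.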
Lip print-based identification using traditional and deep learning
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-05-05 · DOI: 10.1049/bme2.12073
Wardah Farrukh, Dustin van der Haar
Abstract: The concept of biometric identification is centred on the theory that every individual is unique and has distinct characteristics. Various metrics such as fingerprint, face, iris, or retina are adopted for this purpose. Nonetheless, new alternatives are needed to establish the identity of individuals on occasions where the above techniques are unavailable. One emerging method of human recognition is lip-based identification, which can be treated as a new kind of biometric measure. The patterns found on the human lip are permanent unless subjected to alterations or trauma, so lip prints can serve to confirm an individual's identity. The main objective of this work is to design experiments using computer vision methods that can recognise an individual solely from their lip prints. This article compares traditional and deep learning computer vision methods and how they perform on a common dataset for lip-based identification. The first pipeline is a traditional method using Speeded Up Robust Features with either an SVM or a k-NN classifier, which achieved accuracies of 95.45% and 94.31%, respectively. A second pipeline compares the VGG16 and VGG19 deep learning architectures, which obtained accuracies of 91.53% and 93.22%, respectively.
IET Biometrics, Volume 12, Issue 1, Pages 1-12. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12073
Citations: 4
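The k-NN classifier in the first pipeline assigns a query lip print the majority label among its nearest training descriptors. A self-contained sketch on toy 2-D vectors standing in for aggregated SURF descriptors (the real pipeline's features are far higher-dimensional):

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-NN: label the query by majority vote among its k nearest
    training feature vectors under Euclidean distance."""
    d = np.linalg.norm(train_x - query, axis=1)       # distance to each sample
    votes = train_y[np.argsort(d)[:k]]                # labels of k nearest
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# hypothetical 2-D stand-ins for per-image SURF descriptor aggregates
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([0, 0, 1, 1])
pred = knn_predict(train_x, train_y, np.array([0.05, 0.1]), k=3)
print(pred)  # 0: the query sits among the class-0 samples
```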
Time–frequency fusion learning for photoplethysmography biometric recognition
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-04-12 · DOI: 10.1049/bme2.12070
Chunying Liu, Jijiang Yu, Yuwen Huang, Fuxian Huang
Abstract: The photoplethysmography (PPG) signal is a novel biometric trait related to a person's identity, and many time- and frequency-domain methods for PPG biometric recognition have been proposed. However, existing methods consider only a single domain, or feature-level fusion of the time and frequency domains, without exploring the fusion correlations between the two domains. The authors propose a time–frequency fusion method for PPG biometric recognition with collective matrix factorisation (TFCMF) that leverages collective matrix factorisation to learn a shared latent semantic space by exploring the fusion correlations of the time and frequency domains. In addition, the authors utilise the ℓ2,1 norm to constrain the reconstruction error and shared matrix, which alleviates the influence of noise and intra-class variation and ensures the robustness of the learnt semantic space. Experiments demonstrate that TFCMF has better recognition performance than current state-of-the-art methods for PPG biometric recognition.
IET Biometrics, Volume 11, Issue 3, Pages 187-198. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12070
Citations: 1
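The ℓ2,1 norm used here is the sum of the Euclidean norms of a matrix's rows; because it penalises rows as wholes, it drives noisy rows toward zero and makes the factorisation robust to outlying samples. A minimal illustration:

```python
import numpy as np

def l21_norm(m):
    """||M||_{2,1}: the sum of the Euclidean norms of the rows of M."""
    return float(np.linalg.norm(m, axis=1).sum())

m = np.array([[3.0, 4.0],     # row norm 5
              [0.0, 0.0],     # row norm 0
              [5.0, 12.0]])   # row norm 13
print(l21_norm(m))  # 5 + 0 + 13 = 18.0
```

Unlike the Frobenius norm, which squares every entry, the ℓ2,1 norm grows only linearly in each row's magnitude, so a single corrupted sample (row) cannot dominate the objective.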
Using double attention for text tattoo localisation
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-04-08 · DOI: 10.1049/bme2.12071
Xingpeng Xu, Shitala Prasad, Kuanhong Cheng, Adams Wai Kin Kong
Abstract: Text tattoos contain rich information about an individual for forensic investigation. To extract this information, text tattoo localisation is the first and essential step. Previous tattoo studies applied existing object detectors to detect general tattoos, but none considered text tattoo localisation, and they neglected the prior knowledge that text tattoos are usually inside or near larger tattoos and appear only on human skin. To use this prior knowledge, a prior knowledge-based attention mechanism (PKAM) and a network named Text Tattoo Localisation Network based on Double Attention (TTLN-DA) are proposed. In addition to TTLN-DA, two variants of TTLN-DA are designed to study the effectiveness of different prior knowledge. For this study, NTU Tattoo V2, the largest tattoo dataset, and NTU Text Tattoo V1, the largest text tattoo dataset, are established. To examine the importance of the prior knowledge and the effectiveness of the proposed attention mechanism and networks, TTLN-DA and its variants are compared with state-of-the-art object detectors and text detectors. The experimental results indicate that the prior knowledge is vital for text tattoo localisation; the PKAM contributes significantly to the performance, and TTLN-DA outperforms the state-of-the-art object detectors and scene text detectors.
IET Biometrics, Volume 11, Issue 3, Pages 199-214. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12071
Citations: 1
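The skin prior behind PKAM can be pictured as gating a detector's feature map with a skin-probability map so that responses off skin are suppressed. This is an illustrative reading of the idea, not the paper's implementation:

```python
import numpy as np

def prior_gated_features(feat, skin_prob):
    """Suppress feature responses where the skin prior is low.
    feat: (C, H, W) feature map; skin_prob: (H, W) with values in [0, 1]."""
    return feat * skin_prob[None, :, :]   # broadcast the prior over channels

feat = np.ones((2, 2, 2))                 # toy feature map, all responses = 1
skin = np.array([[1.0, 0.0],              # prior: top-right pixel is not skin
                 [0.5, 1.0]])
out = prior_gated_features(feat, skin)
print(out[0])  # responses at the non-skin position are zeroed
```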
Reliable detection of doppelgängers based on deep face representations
IF 2.0 · CAS Zone 4 · Computer Science
IET Biometrics · Pub Date: 2022-04-04 · DOI: 10.1049/bme2.12072
Christian Rathgeb, Daniel Fischer, Pawel Drozdowski, Christoph Busch
Abstract: Doppelgängers (or lookalikes) usually yield an increased probability of false matches in a facial recognition system, as opposed to random face image pairs selected for non-mated comparison trials. In this work, the impact of doppelgängers on the HDA Doppelgänger and Disguised Faces in The Wild databases is assessed using a state-of-the-art face recognition system. It is found that doppelgänger image pairs yield very high similarity scores, resulting in a significant increase in false match rates. Further, a doppelgänger detection method is proposed, which distinguishes doppelgängers from mated comparison trials by analysing differences in deep representations obtained from face image pairs. The proposed detection system employs a machine learning-based classifier trained with doppelgänger image pairs generated using face morphing techniques. Experimental evaluations conducted on the HDA Doppelgänger and Look-Alike Face databases reveal a detection equal error rate of approximately 2.7% for the task of separating mated authentication attempts from doppelgängers.
IET Biometrics, Volume 11, Issue 3, Pages 215-224. Open-access PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2.12072
Citations: 1
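The detection equal error rate reported above is the operating point where the false match rate equals the false non-match rate. A self-contained sketch of computing it from mated and non-mated similarity scores:

```python
import numpy as np

def equal_error_rate(mated, nonmated):
    """Sweep a decision threshold over all observed scores and return the
    error rate where false matches and false non-matches are (nearly) equal."""
    thresholds = np.sort(np.concatenate([mated, nonmated]))
    fmrs = np.array([np.mean(nonmated >= t) for t in thresholds])  # impostors accepted
    fnmrs = np.array([np.mean(mated < t) for t in thresholds])     # genuine rejected
    i = int(np.argmin(np.abs(fmrs - fnmrs)))
    return (fmrs[i] + fnmrs[i]) / 2

mated = np.array([0.9, 0.8, 0.75, 0.7])
nonmated = np.array([0.6, 0.4, 0.3, 0.2])
eer = equal_error_rate(mated, nonmated)
print(eer)  # 0.0: these toy scores separate cleanly
```

With overlapping score distributions, as with real doppelgänger pairs, the two curves cross at a nonzero rate, which is the figure the paper reports.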