Latest Publications from the 2020 IEEE International Joint Conference on Biometrics (IJCB)

Mobile Twin Recognition
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304934
V. Gnatyuk, Alena D. Moskalenko
{"title":"Mobile Twin Recognition","authors":"V. Gnatyuk, Alena D. Moskalenko","doi":"10.1109/IJCB48548.2020.9304934","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304934","url":null,"abstract":"This study focused on one of the most advanced problems in facial recognition - twin differentiation. In recent years, an increasing number of mobile phones have been hacked using the face of the phone owner's sibling/twin, and there are hundreds of videos about this available on the internet. Our main goal is to improve mobile security and protect user data from outside interventions, and therefore we propose a technique which helps to recognize twins to the same extent as humans are able to do so. The main idea involves combining a modern convolutional neural network (CNN) approach with classical handcrafted features, which describe particular characteristics of the human face, such as an asymmetry. Our method was optimized for low performance mobile platforms and it can be simply used by any system with limited resources.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133575462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
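Below is a minimal sketch, not the authors' implementation, of the fusion idea the abstract describes: concatenating a CNN face embedding with a handcrafted asymmetry cue before matching. The asymmetry measure and all names here are illustrative assumptions.

```python
import numpy as np

def asymmetry_feature(face_gray: np.ndarray) -> np.ndarray:
    """Hypothetical handcrafted cue: mean absolute difference between the
    left half of an aligned face and the mirrored right half."""
    h, w = face_gray.shape
    left = face_gray[:, : w // 2].astype(float)
    right = np.fliplr(face_gray[:, w - w // 2 :]).astype(float)
    return np.array([np.abs(left - right).mean()])

def fused_descriptor(cnn_embedding: np.ndarray, face_gray: np.ndarray) -> np.ndarray:
    """Concatenate the (L2-normalized) CNN embedding with the handcrafted
    cue; the normalization and weighting are assumptions, not the paper's."""
    hand = asymmetry_feature(face_gray)
    return np.concatenate([cnn_embedding / np.linalg.norm(cnn_embedding), hand])
```

Two such descriptors would then be compared with any standard distance (e.g., cosine), the premise being that the asymmetry cue adds twin-discriminative information the embedding alone misses.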
Characterizing Light-Adapted Pupil Size in the NIR Spectrum
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304928
A. Clark, Sara S. Abdeldayem, Jiayi Li, T. Bourlai
{"title":"Characterizing Light-Adapted Pupil Size in the NIR Spectrum","authors":"A. Clark, Sara S. Abdeldayem, Jiayi Li, T. Bourlai","doi":"10.1109/IJCB48548.2020.9304928","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304928","url":null,"abstract":"Advances in iris recognition discuss the impact of pupil size variations on iris matching accuracy, thus promoting the need to model these covariates that are present in the near infrared (NIR) spectrum to measure their level of authenticity. This work incorporates these principles to propose a novel methodology that automatically distinguishes subject-specific variations of light-adapted pupil size behavior in iris video sequences consisting of two main steps. In the first step, the sinuous nature of the light-adapted pupil size is characterized, which depends on the dilation extrema, the dilative rate of change, and midpoint behavior. In the second and final step, these aspects are fed into a classification framework that distinguishes subject-specific light-adapted pupil size responses. Experimental results, when using the West Virginia University Pupillary Light Reflex Ramp (WVU-PLR Ramp) dataset, convey the efficacy of this approach with classification accuracies ranging from 92% - 100% when support vector machines (SVMs) are used and 100% when neural networks (NNs) are applied. The results of this work can be used to effectively describe light-adapted variations in pupil size. Additionally, these results indicate that the changes in light-adapted pupil size can potentially be a soft iris biometric trait.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"75 277 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125964921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
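A hedged sketch of the classification step as the abstract frames it: per-sequence features built from dilation extrema, rate of change, and midpoint behavior, fed to an SVM. The exact feature definitions and the placeholder data are assumptions, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pupil_features(radii: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """Assumed per-sequence summary: dilation extrema, rate-of-change
    extrema, and the midpoint value of the pupil radius trace."""
    rates = np.gradient(radii) * fps          # radius change per second
    return np.array([radii.min(), radii.max(),
                     rates.min(), rates.max(),
                     radii[len(radii) // 2]])

# Placeholder data: 20 sequences of 90 frames, 4 subjects x 5 sequences.
X = np.stack([pupil_features(np.random.rand(90)) for _ in range(20)])
y = np.repeat(np.arange(4), 5)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:3]))   # subject-specific response labels
```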
Baracca: a Multimodal Dataset for Anthropometric Measurements in Automotive
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304903
S. Pini, Andrea D'Eusanio, G. Borghi, R. Vezzani, R. Cucchiara
{"title":"Baracca: a Multimodal Dataset for Anthropometric Measurements in Automotive","authors":"S. Pini, Andrea D'Eusanio, G. Borghi, R. Vezzani, R. Cucchiara","doi":"10.1109/IJCB48548.2020.9304903","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304903","url":null,"abstract":"The recent spread of depth sensors has enabled new methods to automatically estimate anthropometric measurements, in place of manual procedures or expensive 3D scanners. Generally, the use of depth data is limited by the lack of depth-based public datasets containing accurate anthropometric annotations. Therefore, in this paper we propose a new dataset, called Baracca, specifically designed for the automotive context, including in-car and outside views. The dataset is multimodal: it has been acquired with synchronized depth, infrared, thermal and RGB cameras in order to deal with the requirements imposed by the automotive context. In addition, we propose several baselines to test the challenges of the presented dataset and provide considerations for future work.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124217959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
DeformGait: Gait Recognition under Posture Changes using Deformation Patterns between Gait Feature Pairs
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304902
Chi Xu, Daisuke Adachi, Yasushi Makihara, Y. Yagi, Jianfeng Lu
{"title":"DeformGait: Gait Recognition under Posture Changes using Deformation Patterns between Gait Feature Pairs","authors":"Chi Xu, Daisuke Adachi, Yasushi Makihara, Y. Yagi, Jianfeng Lu","doi":"10.1109/IJCB48548.2020.9304902","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304902","url":null,"abstract":"In this paper, we propose a unified convolutional neural network (CNN) framework for robust gait recognition against posture changes (e.g., those induced by walking speed changes). In order to mitigate the posture changes, we first register an input matching pair of gait features with different postures by a deformable registration network, which estimates a deformation field to transform the input pair both into their intermediate posture. The pair of the registered features is then fed into a recognition network. Furthermore, ways of the deformation (i.e., deformation patterns) can differ between the same subject pairs (e.g., only posture deformation) and different subject pairs (e.g., not only posture deformation but also body shape deformation), which implies the deformation pattern can be another cue to distinguish the same subject pairs from the different subject pairs. We therefore introduce another recognition network whose input is the deformation pattern. Finally, the deformable registration network, and the two recognition networks for the registered features and the deformation patterns, constitute the whole framework, named DeformGait, and they are trained in an end-to-end manner by minimizing a loss function which is appropriately designed for each of verification and identification scenario. Experiments on the publicly available dataset containing the largest speed variations demonstrate that the proposed method achieves the state-of-the-art performance in both identification and verification scenarios.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127793875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
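A minimal sketch of applying a deformation field to a gait feature map, which is one plausible reading of the registration step; how the paper's network actually parameterizes and predicts the field is not stated in the abstract, so the details below are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(feature: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a feature map by a dense offset field.
    feature: (N, C, H, W); flow: (N, H, W, 2) offsets in [-1, 1] grid units."""
    n, _, h, w = feature.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)  # identity grid
    return F.grid_sample(feature, base + flow, align_corners=True)

# Warping each member of a gait-feature pair halfway toward the other is one
# way to realize the "intermediate posture" the abstract describes (our
# assumption); the residual flow itself would feed the second recognition
# network as the deformation-pattern cue.
```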
Leveraging edges and optical flow on faces for deepfake detection
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304936
Akash Chintha, Aishwarya Rao, Saniat Javid Sohrawardi, Kartavya Bhatt, M. Wright, R. Ptucha
{"title":"Leveraging edges and optical flow on faces for deepfake detection","authors":"Akash Chintha, Aishwarya Rao, Saniat Javid Sohrawardi, Kartavya Bhatt, M. Wright, R. Ptucha","doi":"10.1109/IJCB48548.2020.9304936","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304936","url":null,"abstract":"Deepfakes can be used maliciously to sway public opinion, defame an individual, or commit fraud. Hence, it is vital for journalists and social media platforms, as well as the general public, to be able to detect deepfakes. Existing deepfake detection methods, while highly accurate on datasets they have been trained on, falter in open-world scenarios due to different deepfake generations algorithms, video formats, and compression levels. In this paper, we seek to address this by building on the XceptionNet-based deepfake detection technique that utilizes convolutional latent representations with recurrent structures. In particular, we explore how to leverage a combination of visual frames, edge maps, and dense optical flow maps together as inputs to this architecture. We evaluate these techniques using the FaceForensics++ and DFDC-mini datasets. We also perform extensive studies to evaluate the robustness of our network against adversarial post-processing as well as the generalization capabilities to out-of-domain datasets and manipulation strategies. Our methods, which we call XceptionNet*, achieve 100% accuracy on the popular Face-Forensics-s+ dataset and set new benchmark standards on the difficult DFDC-mini dataset. The XceptionNet* models are shown to exhibit superior performance on cross-domain testing and demonstrate surprising resilience to adversarial manipulations.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130011871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
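As a rough illustration of the auxiliary input modalities described, the OpenCV sketch below derives an edge map and a dense optical-flow map from consecutive face crops; the operators and parameter values used in the paper are not given in the abstract, so these choices are assumptions.

```python
import cv2
import numpy as np

def auxiliary_inputs(prev_bgr: np.ndarray, curr_bgr: np.ndarray):
    """Compute an edge map for the current frame and dense optical flow
    between consecutive frames, to accompany the RGB input."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(curr_gray, 100, 200)  # edge map (thresholds assumed)
    # Farneback dense flow: pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags (all values here are assumptions).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return edges, flow  # edges: (H, W); flow: (H, W, 2)
```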
Development of deep clustering model to stratify occurrence risk of diabetic foot ulcers based on foot pressure patterns and clinical indices
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304917
Xuanchen Ji, Yasuhiro Akiyama, Yoji Yamada, S. Okamoto, H. Hayashi
{"title":"Development of deep clustering model to stratify occurrence risk of diabetic foot ulcers based on foot pressure patterns and clinical indices","authors":"Xuanchen Ji, Yasuhiro Akiyarna, Yoji Yamada, S. Okamoto, H. Hayashi","doi":"10.1109/IJCB48548.2020.9304917","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304917","url":null,"abstract":"In recent years, the number of patients suffering from diabetes mellitus has continued to increase. When diabetes becomes severe, ulcers may form on the feet of the patient. In the past few years, several researchers have focused on the risk factors and avoidance of ulceration. One effective method to predict the occurrence of diabetic foot ulcers is developing a machine learning model. However, few studies combine both clinical indices and mechanical data as the attributes of the training datasets. In this study, we developed a composite model of a convolutional neural network and K-means clustering to extract features from diabetic patients with or without ulceration as well as healthy individuals. Using a deep clustering model, the center of pressure (CoP) trajectory images were divided into three clusters. Furthermore, we evaluated the performance of the clustering by extracting the features from the CoP trajectory images in each cluster and combining them with the clinical indices of the patients. The results showed that patients with ulcers when walking tend to contact the ground with a narrow area of the plantar and apply a small force. Furthermore, it was found that patients undergoing diabetic neuropathy or with a toe amputation have a high potential of suffering from ulcers.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129891962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
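The pipeline the abstract outlines (CNN features followed by K-means over CoP trajectory images) could look roughly like the sketch below; `encode` is a placeholder feature extractor standing in for the paper's network, and the three-cluster setting follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cop_images(images: np.ndarray, encode) -> np.ndarray:
    """Embed each center-of-pressure (CoP) trajectory image with a CNN
    encoder (placeholder `encode`), then group embeddings into 3 clusters."""
    feats = np.stack([encode(img) for img in images])   # (N, D) embeddings
    return KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
```

Cluster assignments would then be cross-referenced with clinical indices (e.g., neuropathy, amputation status) to characterize the risk groups, per the abstract.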
Touch Behavior Based Age Estimation Toward Enhancing Child Safety
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304913
M. Hossain, Carl Haberfeld
{"title":"Touch Behavior Based Age Estimation Toward Enhancing Child Safety","authors":"M. Hossain, Carl Haberfeld","doi":"10.1109/IJCB48548.2020.9304913","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304913","url":null,"abstract":"Adult content on the Internet may be accessed by children with only a few keystrokes. While separate child-safe accounts may be established, a better approach could be incorporating automatic age estimation capability into the browser. We envision a safer browsing experience by implementing child-safe browsers combined with Internet content rating similar to the film industry. Before such a browser is created it was necessary to test the age estimation module to see whether acceptable error rates are possible. We created an Android application for collecting biometric touch data, specifically tapping data. We arranged with an elementary school, a middle school, a high school, and a university and collected samples from 262 user sessions (ages 5 to 61). From the tapping data, feature vectors were constructed, which were used to train and test 14 regressors and classifiers. Results for regression show the best mean absolute errors of 3.451 and 3.027 years, respectively, for phones and tablets. Results for classification show the best accuracies of 73.63% and 82.28%, respectively, for phones and tablets. These results demonstrate that age estimation, and hence, a child-safe browser, is feasible, and is a worthwhile objective.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115365270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
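For illustration only, the sketch below trains one plausible regressor on placeholder tap features and reports MAE in years, mirroring the paper's evaluation; the feature set and model choice are assumptions, since the abstract does not name the best of the 14 models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder data: 262 sessions, 8 assumed tap features per session
# (e.g., tap duration, pressure, touch area, inter-tap interval).
X = np.random.rand(262, 8)
y = np.random.randint(5, 62, 262)   # ages 5 to 61, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (years):", mean_absolute_error(y_te, reg.predict(X_te)))
```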
Long-Term Face Tracking for Crowded Video-Surveillance Scenarios
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304892
Germán Barquero, Carles Fernández Tena, I. Hupont
{"title":"Long-Term Face Tracking for Crowded Video-Surveillance Scenarios","authors":"Germán Barquero, Carles Fernández Tena, I. Hupont","doi":"10.1109/IJCB48548.2020.9304892","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304892","url":null,"abstract":"Most current multi-object trackers focus on short-term tracking, and are based on deep and complex systems that do not operate in real-time, often making them impractical for video-surveillance. In this paper, we present a longterm multi-face tracking architecture conceived for working in crowded contexts, particularly unconstrained in terms of movement and occlusions, and where the face is often the only visible part of the person. Our system benefits from advances in the fields of face detection and face recognition to achieve long-term tracking. It follows a tracking-by-detection approach, combining a fast short-term visual tracker with a novel online tracklet reconnection strategy grounded on face verification. Additionally, a correction module is included to correct past track assignments with no extra computational cost. We present a series of experiments introducing novel, specialized metrics for the evaluation of long-term tracking capabilities and a video dataset that we publicly release. Findings demonstrate that, in this context, our approach allows to obtain up to 50% longer tracks than state-of-the-art deep learning trackers.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128364038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
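A minimal sketch, under our own assumptions, of the tracklet-reconnection step: a newly started tracklet is linked to a previously lost identity when face-embedding similarity clears a verification threshold. The cosine threshold and the embedding source are illustrative, not values from the paper.

```python
import numpy as np

def reconnect(new_emb: np.ndarray, lost: dict[int, np.ndarray],
              thresh: float = 0.6) -> int | None:
    """Return the id of the lost tracklet whose face embedding best matches
    the new tracklet's embedding, or None if no match clears the threshold."""
    best_id, best_sim = None, thresh
    for track_id, emb in lost.items():
        sim = float(np.dot(new_emb, emb) /
                    (np.linalg.norm(new_emb) * np.linalg.norm(emb)))
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id  # None -> start a new identity
```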
On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Applications
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304919
F. Boutros, N. Damer, K. Raja, Raghavendra Ramachandra, Florian Kirchbuchner, Arjan Kuijper
{"title":"On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Applications","authors":"F. Boutros, N. Damer, K. Raja, Raghavendra Ramachandra, Florian Kirchbuchner, Arjan Kuijper","doi":"10.1109/IJCB48548.2020.9304919","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304919","url":null,"abstract":"Augmented and virtual reality is being deployed in different fields of applications. Such applications might involve accessing or processing critical and sensitive information, which requires strict and continuous access control. Given that Head-Mounted Displays (HMD) developed for such applications commonly contains internal cameras for gaze tracking purposes, we evaluate the suitability of such setup for verifying the users through iris recognition. In this work, we first evaluate a set of iris recognition algorithms suitable for HMD devices by investigating three well-established handcrafted feature extraction approaches, and to complement it, we also present the analysis using four deep learning models. While taking into consideration the minimalistic hardware requirements of stand-alone HMD, we employ and adapt a recently developed miniature segmentation model (EyeMMS) for segmenting the iris. Further, to account for non-ideal and non-collaborative capture of iris, we define a new iris quality metric that we termed as Iris Mask Ratio (IMR) to quantify the iris recognition performance. Motivated by the performance of iris recognition, we also propose the continuous authentication of users in a non-collaborative capture setting in HMD. Through the experiments on a publicly available OpenEDS dataset, we show that performance with EER = 5% can be achieved using deep learning methods in a general setting, along with high accuracy for continuous user authentication.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122166896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
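Since the headline result is an equal error rate, a standard EER computation (not code from the paper) is sketched below: sweep thresholds over genuine and impostor comparison scores and report the point where false accept and false reject rates meet.

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER over genuine (same-user) and impostor (different-user) scores,
    where higher scores mean more similar."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    idx = int(np.argmin(np.abs(far - frr)))
    return float((far[idx] + frr[idx]) / 2)

# Synthetic score distributions for demonstration only.
print(equal_error_rate(np.random.normal(0.8, 0.1, 500),
                       np.random.normal(0.4, 0.1, 500)))
```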
3D Iris Recognition using Spin Images
2020 IEEE International Joint Conference on Biometrics (IJCB) Pub Date : 2020-09-28 DOI: 10.1109/IJCB48548.2020.9304890
Daniel P. Benalcazar, Daniel A. Montecino, Jorge E. Zambrano, C. Pérez, K. Bowyer
{"title":"3D Iris Recognition using Spin Images","authors":"Daniel P. Benalcazar, Daniel A. Montecino, Jorge E. Zambrano, C. Pérez, K. Bowyer","doi":"10.1109/IJCB48548.2020.9304890","DOIUrl":"https://doi.org/10.1109/IJCB48548.2020.9304890","url":null,"abstract":"The high demand for ever more accurate biometric systems has driven the search for methods that reconstruct the iris surface in a 3D model. The intent in adding the depth dimension is to improve accuracy even in large databases. Here, we present a novel approach to iris recognition from 3D models. First, the iris 3D model is reconstructed from a single image using irisDepth, a CNN based method. Then, a 3D descriptor called Spin Image is obtained for keypoints of the 3D model. After that, matches are found between keypoints in the query and the reference 3D models using k-dimensional trees. Finally, those keypoint matches are used to determine the spatial transformation that best aligns the 3D models. A combination of the transformation error and the inlier ratio is used as the metric to assess the similarity of two iris 3D models. We applied this method in a dataset of 100 eyes and 2,000 iris 3D models. Our results indicate that using the proposed method is more effective than alternative methods, such as Dougman's iris code, point-to-point distance between the 3D models, the 3D rubber sheet model, and CNN-based methods.","PeriodicalId":417270,"journal":{"name":"2020 IEEE International Joint Conference on Biometrics (IJCB)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115143658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
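A sketch of the k-d-tree matching step named in the abstract, using SciPy's cKDTree over spin-image descriptors; descriptor computation and the subsequent alignment and inlier scoring are omitted, and the distance cutoff is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_keypoints(query_desc: np.ndarray, ref_desc: np.ndarray,
                    max_dist: float = 0.5) -> list[tuple[int, int]]:
    """Match each query spin-image descriptor to its nearest reference
    descriptor; keep only matches closer than max_dist (assumed cutoff).
    query_desc: (Nq, D); ref_desc: (Nr, D)."""
    tree = cKDTree(ref_desc)                  # build once over the reference
    dists, idxs = tree.query(query_desc, k=1)
    return [(qi, int(ri)) for qi, (d, ri) in enumerate(zip(dists, idxs))
            if d <= max_dist]
```

The resulting correspondences would feed a rigid-alignment estimate, with transformation error and inlier ratio combined into the similarity score the abstract describes.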