{"title":"Face Liveness Detection by rPPG Features and Contextual Patch-Based CNN","authors":"Bofan Lin, Xiaobai Li, Zitong Yu, Guoying Zhao","doi":"10.1145/3345336.3345345","DOIUrl":"https://doi.org/10.1145/3345336.3345345","url":null,"abstract":"Face anti-spoofing plays a vital role in security systems, including face payment and face recognition systems. Previous studies showed that live faces and presentation attacks differ significantly in both remote photoplethysmography (rPPG) and texture information; accordingly, we propose a generalized method exploiting both rPPG and texture features for the face anti-spoofing task. First, multi-scale long-term statistical spectral (MS-LTSS) features with variant granularities are designed to represent rPPG information. Second, a contextual patch-based convolutional neural network (CP-CNN) is used to extract global-local and multi-level deep texture features simultaneously. Finally, a weighted summation strategy is employed for decision-level fusion, which helps the method generalize not only to print and replay attacks but also to mask attacks. Comprehensive experiments were conducted on five databases, namely 3DMAD, HKBU-Mars VI, MSU-MFSD, CASIA-FASD, and OULU-NPU, showing the superior results of the proposed method compared with state-of-the-art methods.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130585341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SmartHandle: A Novel Behavioral Biometric-based Authentication Scheme for Smart Lock Systems","authors":"Sandeep Gupta, Attaullah Buriro, B. Crispo","doi":"10.1145/3345336.3345344","DOIUrl":"https://doi.org/10.1145/3345336.3345344","url":null,"abstract":"Over recent years, smart locks have evolved as cyber-physical devices that can be operated by digital keypads, physiological biometric sensors, smart-card readers, or mobile device pairing, to secure door access. However, the underlying authentication schemes utilized in smart locks, i.e., knowledge-based (e.g., PIN/passwords), possession-based (e.g., smartphones, smart cards), or physiological biometric-based (e.g., fingerprint, face), have shown several drawbacks. Studies have determined that these authentication schemes are vulnerable to various attacks and also lack usability. This paper presents SmartHandle - a novel behavioral biometric-based transparent user authentication scheme for smart locks that exploits users' hand movement while they rotate the door handle to unlock the door. More specifically, our solution models the user's hand movement in 3-dimensional space by fetching the X, Y, and Z coordinates corresponding to the hand-movement trajectory from 3 sensors, namely, the accelerometer, magnetometer, and gyroscope, to generate a user-identification signature. We validated our solution in a multi-class classification scenario and achieved a True Acceptance Rate (TAR) of 87.27% at a False Acceptance Rate (FAR) of 1.39% with the Linear Discriminant Classifier (LDC) on a dataset collected from 11 users. The solution can be easily deployed at the main entrance of homes and offices, offering a secure and usable authentication scheme to their legitimate users.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116749418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph Embedding for Offline Handwritten Signature Verification","authors":"Michael Stauffer, Paul Maergner, Andreas Fischer, Kaspar Riesen","doi":"10.1145/3345336.3345346","DOIUrl":"https://doi.org/10.1145/3345336.3345346","url":null,"abstract":"Due to their high availability and applicability, handwritten signatures are a prominent biometric authentication measure in daily life. To mitigate the risk of potential misuse, automatic signature verification tries to distinguish between genuine and forged signatures. Most of the available signature verification approaches make use of vectorial rather than graph-based representations of the handwriting. This is rather surprising, as graphs offer some inherent advantages. Graphs are, for instance, able to directly adapt their size and structure to the size and complexity of the respective handwritten entities. Moreover, several fast graph matching algorithms have been proposed recently that allow graphs to be employed also in domains with large amounts of data. The present paper proposes to use different graph embedding approaches in conjunction with a recent graph-based signature verification framework. That is, signature graphs are not directly matched with each other, but first compared with a set of predefined prototype graphs in order to obtain a dissimilarity representation. In an experimental evaluation, we employ the proposed method on two widely used benchmark datasets. On both datasets, we empirically confirm that the learning-free graph embedding outperforms state-of-the-art methods with respect to both accuracy and runtime.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115738556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Liars in Chats using Keystroke Dynamics","authors":"Parisa Rezaee Borj, Patrick A. H. Bours","doi":"10.1145/3345336.3345337","DOIUrl":"https://doi.org/10.1145/3345336.3345337","url":null,"abstract":"In this paper, we investigate the possibilities for detecting liars in chat rooms who have taken on a different identity. While using a different identity, people might require more time to reply to questions from the chat partner, or might use corrections to change their text to avoid inconsistencies in their answers. These issues cause differences in typing behavior, which can be measured in the typing rhythm. We show in this paper that, with high accuracy, we can distinguish between a chat of a person who uses his/her own identity and is honest in his/her answers, and a chat of a person who is lying because his/her answers need to be consistent with an assumed identity. We obtained a correct classification of a single message in a chat with an accuracy of more than 70% and a correct classification of a full chat with well over 90% accuracy.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"169 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128850562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-database Micro-Expression Recognition with Deep Convolutional Networks","authors":"Zhaoqiang Xia, Huan Liang, Xiaopeng Hong, Xiaoyi Feng","doi":"10.1145/3345336.3345343","DOIUrl":"https://doi.org/10.1145/3345336.3345343","url":null,"abstract":"Micro-expression recognition (MER) is attracting more and more interest, as it has important applications for analyzing human behaviors. While recognition performance on individual datasets has improved greatly, few works have been devoted to the cross-database MER task, which is more challenging because the subtle changes of micro-expressions must be captured across different environments. In this paper, we employ an end-to-end deep model for learning the representation and classifier automatically. In the deep model, recurrent convolutional layers are utilized to exploit the learning ability on the optical flow fields of micro-expression sequences, which are enhanced by a motion magnification procedure. To ease the influence of samples from different datasets (environments), we present three normalization methods (i.e., sample-wise, subject-wise, and dataset-wise) to restrain the variations of samples. The experiments are performed on the cross-database task of the MERC2019 challenge and achieve performance comparable to the baseline method.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"73 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123115787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface Normals Based Landmarking for 3D Face Recognition Using Photometric Stereo Captures","authors":"Jiangning Gao, M. Hansen, Melvyn L. Smith, A. Evans","doi":"10.1145/3345336.3345339","DOIUrl":"https://doi.org/10.1145/3345336.3345339","url":null,"abstract":"In recent decades, many 3D data acquisition methods have been developed to provide accurate and cost-effective 3D captures of the human face. An example system, which can accommodate both research and commercial applications, is the Photoface device. Photoface is based on the photometric stereo imaging technique. To improve the recognition performance using Photoface captures, a novel landmarking algorithm is first proposed based on thresholding surface normal maps. The development of landmarking algorithms specifically for photometric stereo captures enables region-based feature extraction and fills a gap in the 3D face landmarking literature. Nasal curves and spherical patches are then each used for recognition and are evaluated on the 3DE-VISIR database, which contains Photoface captures with expressions. The neutral vs. non-neutral matching results demonstrate high face recognition performance using spherical patches and a KFA classifier, achieving a R1RR of 97.26% when only 24 patches are selected for matching.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131421406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Factors on BVP Signal Extraction Based on Imaging Principle","authors":"Xiaobiao Zhang, Xiaoyi Feng, Zhaoqiang Xia","doi":"10.1145/3345336.3345342","DOIUrl":"https://doi.org/10.1145/3345336.3345342","url":null,"abstract":"The extraction of human physiological signals from face video has become a popular research direction, but few researchers have examined the signal extraction process from an optical perspective. This paper establishes an optical model of human skin and analyzes the principle of extracting human physiological signals from face video by imaging. Based on this model, the effects of melanin and hemoglobin on BVP (blood volume pulse) signal extraction are analyzed. In addition, this paper introduces a model of camera imaging, discusses the causes of noise generation, and analyzes the impact of two kinds of noise on video quality. Finally, this paper uses the MAHNOB database to carry out heart rate extraction experiments from video, which conclude that: (1) faces with lighter skin color are more conducive to heart rate extraction; (2) when video compression lowers the signal-to-noise ratio, the impact on the heart rate estimation error reaches its minimum at a signal-to-noise ratio of 30 dB; (3) compressing the original video resolution reduces the color information in the video, which affects BVP signal extraction; this information can be recovered by super-resolution reconstruction.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"23 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121106427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subjective Versus Objective Face Image Quality Evaluation For Face Recognition","authors":"Ali Khodabakhsh, Marius Pedersen, C. Busch","doi":"10.1145/3345336.3345338","DOIUrl":"https://doi.org/10.1145/3345336.3345338","url":null,"abstract":"The performance of any face recognition system is affected by the quality of the probe and reference images. Rejecting or recapturing low-quality images can improve the overall performance of the biometric system. There are many statistical as well as learning-based methods that provide quality scores for a given image for the task of face recognition. In this study, we take a different approach by asking 26 participants to provide subjective quality scores that represent the ease of recognizing the face in images from a smartphone-based face image dataset. These scores are then compared to measures implemented from ISO/IEC TR 29794-5. We observe that the subjective scores outperform the implemented objective scores while having a low correlation with them. Furthermore, we analyze the effect of pose, illumination, and distance on face recognition similarity scores as well as on the generated mean opinion scores.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132580433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finger Vein Image Compression with Uniform Background","authors":"Babak Maser, Jutta Hämmerle-Uhl, T. Lipowski, A. Uhl","doi":"10.1145/3345336.3345347","DOIUrl":"https://doi.org/10.1145/3345336.3345347","url":null,"abstract":"We propose to replace the background data in finger vein imagery with uniform gray data, and we determine the implications on (i) achieved lossless compression performance and (ii) obtained recognition accuracy in the case of lossy compression, employing two public datasets. Results indicate that replacing the original background with a uniform one is clearly beneficial for lossless compression, while the lossy case, with its impact on recognition accuracy, has to be handled with caution, as the introduced sharp edges between the finger area and the background lead to artifacts which in turn degrade recognition performance. After smoothing those areas, recognition performance is improved when replacing the background for all settings.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"66 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120969579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning of Brain Connectivity Features for EEG-based Person Identification","authors":"N. Nyah, Nikolaos Christou","doi":"10.1145/3345336.3345340","DOIUrl":"https://doi.org/10.1145/3345336.3345340","url":null,"abstract":"The brain activity observed on multiple EEG electrodes is influenced by volume conductance and the functional connectivity of a person performing a task. When the task is a biometric test, EEG signals represent a unique 'brain print', which is genetically defined by the functional connectivity represented by interactions between the electrodes, whilst the conductance component causes trivial correlations in EEG signals. Orthogonalisation using autoregressive modelling minimises the conductance component, and the connectivity features can then be extracted from the residuals. However, the results may not be reliable for high-dimensional EEG data recorded via a multi-electrode system. The proposed method shows that the dimensionality can be significantly reduced if the baselines required for estimating the residuals are modelled using the EEG electrodes that make an important contribution to the functional connectivity. The results show that the required models can be learnt by Machine Learning techniques capable of providing maximal performance in the case of multidimensional EEG data. The study, conducted on an EEG benchmark including 109 participants, shows a significant improvement in identification accuracy.","PeriodicalId":262849,"journal":{"name":"International Conference on Biometrics Engineering and Application","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117052417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}