{"title":"Multi-subband and Multi-subepoch Time Series Feature Learning for EEG-based Sleep Stage Classification","authors":"Panfeng An, Zhiyong Yuan, Jianhui Zhao, Xue Jiang, Zengmao Wang, Bo Du","doi":"10.1109/IJCB52358.2021.9484344","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484344","url":null,"abstract":"EEG plays an important role in the analysis and recognition of brain activity, and which has great potential in the field of biometrics, while EEG-based time series classification is complicated and difficult due to the nonstationary characteristics and individual difference. In this paper, we investigate the EEG signal classification problem and propose a multi-subband and multi-subepoch time series feature learning (MMTSFL) method for automatic sleep stage classification. Specifically, MMTSFL first decomposes multiple subbands with various frequency from raw EEG signals and partitions the obtained subbands in-to multiple consecutive subepochs, and then employs time series feature learning to obtain effective discriminant features. Moreover, amplitude-time based signal features are extracted from each subepoch to represent dynamic variation of EEG signals, and MMTSFL conduct further multipurpose feature learning for specific features, consistent features and temporal features simultaneously. Experiment results on three classification tasks of sleep quality evaluation, fatigue detection and sleep disease diagnosis demonstrate the superiority of the proposed method.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127824366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concealable Biometric-based Continuous User Authentication System An EEG Induced Deep Learning Model","authors":"S. Gopal, Diksha Shukla","doi":"10.1109/IJCB52358.2021.9484345","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484345","url":null,"abstract":"This paper introduces a lightweight, low-cost, easy-to-use, and unobtrusive continuous user authentication system based on concealable biometric signals. The proposed authentication model continuously verifies a user’s identity throughout the user session while s/he watches a video or performs free-text typing on his/her desktop/laptop keyboard. The authentication model utilizes unobtrusively recorded electroencephalogram (EEG) signals and learns the user’s unique biometric signature based on his/her brain activity.Our work has multifold impact in the area of EEG-based authentication: (1) a comprehensive study and a comparative analysis of a wide range of extracted features are presented. These features are categorized based on the EEG electrodes placement position on the user’s head, (2) an optimal feature subset is constructed using a minimal number of EEG electrodes, (3) a deep neural network-based user authentication model is presented that utilizes the constructed optimal feature subset, and (4) a detailed experimental analysis on a publicly available EEG dataset of 26 volunteer participants is presented.Our experimental results show that the proposed authentication model could achieve an average Equal Error Rate (EER) of 0.137%. Although a thorough analysis on a larger pool of subjects must be performed, our results show the viability of low-cost, lightweight EEG-based continuous user authentication systems.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128008915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Avoiding Spectacles Reflections on Iris Images Using A Ray-tracing Method","authors":"Yu Tian, Kunbo Zhang, Leyuan Wang, Chong Zhang","doi":"10.1109/IJCB52358.2021.9484402","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484402","url":null,"abstract":"Spectacles reflection removal is a challenging problem in iris recognition research. The reflection of the spectacles usually contaminates the iris image acquired under infrared illumination. The intense light reflection caused by the active light source makes reflection removal more challenging than normal scenes since important iris texture features are entirely obscured. Eliminating unnecessary reflections can effectively improve iris recognition system performance. This paper proposes a spectacle reflection removal algorithm based on ray coding and ray tracking to remove spectacle reflection in iris images. By decoding the light source’s encoded light beam, the iris imaging device eliminates most of the stray light. Our binocular imaging device tracks the light path to obtain parallax information and realizes reflected light spot removal through image fusion. We designed a prototype system to verify our proposed method in this paper. This method can effectively eliminate reflections without changing iris texture and improve iris recognition in complex scenarios.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125457349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical Face Swapping Detection Based on Identity Spatial Constraints","authors":"Jun Jiang, Bo Wang, Bing Li, Weiming Hu","doi":"10.1109/IJCB52358.2021.9484396","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484396","url":null,"abstract":"The generalization of face swapping detectors against unseen face manipulation methods is important to practical applications. Most existing methods based on convolutional neural networks (CNN) simply map the facial images to real/fake binary labels and achieve high performance on the known forgeries, but they almost fail to detect new manipulation methods. In order to improve the generalization of face swapping detection, this work concentrates on a practical scenario to protect specific persons by proposing a novel face swapping detector requiring a reference image. To this end, we design a new detection framework based on identity spatial constraints (DISC), which consists of a backbone network and an identity semantic encoder (ISE). When inspecting an image of a particular person, the ISE utilizes a real facial image of that person as the reference to constrain the backbone to focus on the identity-related facial areas, so as to exploit the intrinsic discriminative clues to the forgery in the query image. Cross-dataset evaluations on five large-scale face forgery datasets show that DISC significantly improves the performance against unseen manipulation methods and is robust against the distortions. Compared to the existing detection methods, the AUC scores achieve 10%~40% performance improvements.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129281022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BioCanCrypto: An LDPC Coded Bio-Cryptosystem on Fingerprint Cancellable Template","authors":"Xingbo Dong, Zhe Jin, Leshan Zhao, Zhenhua Guo","doi":"10.1109/IJCB52358.2021.9484391","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484391","url":null,"abstract":"Biometrics as a means of personal authentication has demonstrated strong viability in the past decade. However, directly deriving a unique cryptographic key from biometric data is a non-trivial task due to the fact that biometric data is usually noisy and presents large intra-class variations. Moreover, biometric data is permanently associated with the user, which leads to security and privacy issues. Cancellable biometrics and bio-cryptosystem are two main branches to address those issues, yet both approaches fall short in terms of accuracy performance, security, and privacy. In this paper, we propose a Bio-Crypto system on fingerprint Cancellable template (Bio-CanCrypto), which bridges cancellable biometrics and bio-cryptosystem to achieve a middle-ground for alleviating the limitations of both. Specifically, a cancellable transformation is applied on a fixed-length fingerprint feature vector to generate cancellable templates. Next, an LDPC coding mechanism is introduced into a reusable fuzzy extractor scheme and used to extract the stable cryptographic key from the generated cancellable templates. The proposed system can achieve both cancellability and reusability in one scheme. Experiments are conducted on a public fingerprint dataset, i.e., FVC2002. The results demonstrate that the proposed LDPC coded reusable fuzzy extractor is effective and promising.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131446968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Rhythmic Patterns for Face Forgery Detection and Categorization","authors":"Jiahao Liang, Weihong Deng","doi":"10.1109/IJCB52358.2021.9484400","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484400","url":null,"abstract":"With the emergence of GAN, face forgery technologies have been heavily abused. Achieving accurate face forgery detection is imminent. Inspired by remote photoplethysmography (rPPG) that PPG signal corresponds to the periodic change of skin color caused by heartbeat in face videos, we observe that despite the inevitable loss of PPG signal during the forgery process, there is still a mixture of PPG signals in the forgery video with a unique rhythmic pattern depending on its generation method. Motivated by this key observation, we propose a two-stage network for face forgery detection and categorization consisting of: 1) a Spatial-Temporal Filter Module (STFM) for PPG signals filtering, and 2) an Adjacency Interaction Module (AIM) for constraint and interaction of PPG signals. Moreover, with insight into the generation of forgery methods, we further propose Spatial-Temporal Mixup (ST-Mixup) to boost the performance of the network. Overall, extensive experiments have proved the superiority of our method.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131764999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Feature Distribution Alignment Learning for NIR-VIS and VIS-VIS Face Recognition","authors":"T. Miyamoto, H. Hashimoto, Akihiro Hayasaka, Akinori F. Ebihara, Hitoshi Imaoka","doi":"10.1109/IJCB52358.2021.9484385","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484385","url":null,"abstract":"Face recognition for visible light (VIS) images achieve high accuracy thanks to the recent development of deep learning. However, heterogeneous face recognition (HFR), which is a face matching in different domains, is still a difficult task due to the domain discrepancy and lack of large HFR dataset. Several methods have attempted to reduce the domain discrepancy by means of fine-tuning, which causes significant degradation of the performance in the VIS domain because it loses the highly discriminative VIS representation. To overcome this problem, we propose joint feature distribution alignment learning (JFDAL) which is a joint learning approach utilizing knowledge distillation. It enables us to achieve high HFR performance with retaining the original performance for the VIS domain. Extensive experiments demonstrate that our proposed method delivers statistically significantly better performances compared with the conventional fine-tuning approach on a public HFR dataset Oulu-CASIA NIR&VIS and popular verification datasets in VIS domain such as FLW, CFP, AgeDB. Furthermore, comparative experiments with existing state-of-the-art HFR methods show that our method achieves a comparable HFR performance on the Oulu-CASIA NIR&VIS dataset with less degradation of VIS performance.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116023568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from Program Chairs of IJCB 2021","authors":"","doi":"10.1109/ijcb52358.2021.9521651","DOIUrl":"https://doi.org/10.1109/ijcb52358.2021.9521651","url":null,"abstract":"","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115702245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NIR Iris Challenge Evaluation in Non-cooperative Environments: Segmentation and Localization","authors":"Caiyong Wang, Yunlong Wang, Kunbo Zhang, Jawad Muhammad, T. Lu, Qi Zhang, Q. Tian, Zhaofeng He, Zhenan Sun, Yiwen Zhang, Tian Liu, Wei Yang, Dongliang Wu, Yingfeng Liu, Ruiye Zhou, Huihai Wu, Hao Zhang, Junbao Wang, Jiayi Wang, Wantong Xiong, Xueyu Shi, Shaogeng Zeng, Peihua Li, Haodong Sun, Jing Wang, Jiale Zhang, Qi Wang, Huijie Wu, Xinhui Zhang, Haiqing Li, Yu Chen, Liang Chen, Menghan Zhang, Ye Sun, Zhiyong Zhou, F. Boutros, N. Damer, Arjan Kuijper, Juan E. Tapia, A. Valenzuela, C. Busch, G. Gupta, K. Raja, Xi Wu, Xiaojie Li, Jingfu Yang, Hongyan Jing, Xin Wang, B. Kong, Youbing Yin, Qi Song, Siwei Lyu, Shu Hu, L. Premk, Matej Vitek, Vitomir Štruc, P. Peer, J. Khiarak, F. Jaryani, Samaneh Salehi Nasab, S. N. Moafinejad, Y. Amini, M. Noshad","doi":"10.1109/IJCB52358.2021.9484336","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484336","url":null,"abstract":"For iris recognition in non-cooperative environments, iris segmentation has been regarded as the first most important challenge still open to the biometric community, affecting all downstream tasks from normalization to recognition. In recent years, deep learning technologies have gained significant popularity among various computer vision tasks and also been introduced in iris biometrics, especially iris segmentation. To investigate recent developments and attract more interest of researchers in the iris segmentation method, we organized the 2021 NIR Iris Challenge Evaluation in Non-cooperative Environments: Segmentation and Localization (NIR-ISL 2021) at the 2021 International Joint Conference on Biometrics (IJCB 2021). The challenge was used as a public platform to assess the performance of iris segmentation and localization methods on Asian and African NIR iris images captured in non-cooperative environments. The three best-performing entries achieved solid and satisfactory iris segmentation and localization results in most cases, and their code and models have been made publicly available for reproducibility research.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130206555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrastive Self-supervised Learning for Sensor-based Human Activity Recognition","authors":"Bulat Khaertdinov, E. Ghaleb, S. Asteriadis","doi":"10.1109/IJCB52358.2021.9484410","DOIUrl":"https://doi.org/10.1109/IJCB52358.2021.9484410","url":null,"abstract":"Deep Learning models, applied to a sensor-based Human Activity Recognition task, usually require vast amounts of annotated time-series data to extract robust features. However, annotating signals coming from wearable sensors can be a tedious and, often, not so intuitive process, that requires specialized tools and predefined scenarios, making it an expensive and time-consuming task. This paper combines one of the most recent advances in Self-Supervised Leaning (SSL), namely a SimCLR framework, with a powerful transformer-based encoder to introduce a Contrastive Self-supervised learning approach to Sensor-based Human Activity Recognition (CSSHAR) that learns feature representations from unlabeled sensory data. Extensive experiments conducted on three widely used public datasets have shown that the proposed method outperforms recent SSL models. Moreover, CSSHAR is capable of extracting more robust features than the identical supervised transformer when transferring knowledge from one dataset to another as well as when very limited amounts of annotated data are available.","PeriodicalId":175984,"journal":{"name":"2021 IEEE International Joint Conference on Biometrics (IJCB)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122544092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}