Exploiting Non-uniform Inherent Cues to Improve Presentation Attack Detection
Yaowen Xu, Zhuming Wang, Hu Han, Lifang Wu, Yongluo Liu
2021 IEEE International Joint Conference on Biometrics (IJCB), published 2021-08-04
DOI: 10.1109/IJCB52358.2021.9484389
Citations: 4
Abstract
Face anti-spoofing plays a vital role in face recognition systems. Existing deep learning approaches have effectively improved the performance of presentation attack detection (PAD). However, they learn a uniform feature for different types of presentation attacks, which ignores the diversity of the inherent cues presented by different spoofing types. As a result, they cannot effectively represent the intrinsic differences between spoof faces and live faces, and their performance drops on cross-domain databases. In this paper, we introduce the inherent cues of different spoofing types through non-uniform learning as complements to uniform features. Two lightweight sub-networks are designed to learn inherent motion patterns from photo attacks and inherent texture cues from video attacks. Furthermore, an element-wise weighting fusion strategy is proposed to integrate the non-uniform inherent cues and the uniform features. Extensive experiments on four public databases demonstrate that our approach outperforms state-of-the-art methods, achieving a superior 3.7% ACER on the cross-domain Protocol 4 of the Oulu-NPU database. Code is available at https://github.com/BJUT-VIP/Non-uniform-cues.
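The element-wise weighting fusion described in the abstract can be pictured as predicting a per-position, per-channel gate that decides how much to trust the non-uniform inherent cue versus the uniform feature at each location. The following is a minimal PyTorch sketch of such a fusion under stated assumptions: the module name `ElementWiseFusion`, the 1x1-convolution gating, and the tensor shapes are illustrative choices, not the authors' released implementation (see the GitHub link above for the official code).

```python
import torch
import torch.nn as nn


class ElementWiseFusion(nn.Module):
    """Illustrative element-wise weighted fusion of a uniform feature map
    and a non-uniform inherent-cue map (assumed shapes: N x C x H x W)."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 conv predicts an element-wise weight map from the concatenated inputs.
        self.weight_conv = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, uniform_feat: torch.Tensor, inherent_feat: torch.Tensor) -> torch.Tensor:
        # Sigmoid gate in [0, 1] for every spatial position and channel.
        gate = torch.sigmoid(
            self.weight_conv(torch.cat([uniform_feat, inherent_feat], dim=1))
        )
        # Convex combination: gate selects the inherent cue, (1 - gate) the uniform feature.
        return gate * inherent_feat + (1.0 - gate) * uniform_feat


if __name__ == "__main__":
    fuse = ElementWiseFusion(channels=64)
    uniform = torch.randn(2, 64, 14, 14)    # uniform PAD features
    inherent = torch.randn(2, 64, 14, 14)   # inherent motion/texture cues
    print(fuse(uniform, inherent).shape)    # torch.Size([2, 64, 14, 14])
```

Because the gate is learned per element rather than per feature map, the network can rely on motion cues in regions where they are informative (e.g., rigid photo motion) and fall back to the uniform representation elsewhere, which is the intuition behind combining non-uniform and uniform features.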