{"title":"Patch-Swap Based Approach for Face Anti-Spoofing Enhancement","authors":"Qiushi Guo, Shisha Liao, Yifan Chen, Shihua Xiao, Jin Ma, Tengteng Zhang","doi":"10.1109/TENSYMP55890.2023.10223630","DOIUrl":null,"url":null,"abstract":"Face Recognition system is widely used in recent years, however it is still vulnerable to various attacks, ranging from 2D presentation attacks(PA) to 3D masks attacks. Among them, part-cut print paper attack is an easy-of-use yet challenging fraud approach, which can imitate eyes-blick and mouth-open actions that are commonly used as living clues in face recognition system. Besides, the wide range of materials of print papers makes the task even harder. Existing approaches neglect to decouple the structure features from paper types. Though attack images which are similar to the training data can be detected, the accuracy drops dramatically when tested on unseen paper types. However, it's impossible to collect a face dataset covering all types of paper materials with sufficient identities. To alleviate these issues, we propose a Patch-Swap module, which generates synthetic images simulating part-cut print paper attacks. We randomly take two images from CelebA-HQ, crop the patches of eyes and mouth and swap the above patches respectively. With no extra images collected and annotated, the whole process is efficient and effective. We then train a resnet-based model PSRes with our synthetic data. To prove the robustness and effectiveness of our approach, we conduct several experiments on public datasets Rose and CelebA-Spoof, the results show that our PSRes outperforms the existing methods. 
Besides, our approach show better generalization capability even tested on unseen materials.","PeriodicalId":314726,"journal":{"name":"2023 IEEE Region 10 Symposium (TENSYMP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Region 10 Symposium (TENSYMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TENSYMP55890.2023.10223630","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Face recognition systems have been widely deployed in recent years, yet they remain vulnerable to various attacks, ranging from 2D presentation attacks (PA) to 3D mask attacks. Among them, the part-cut print paper attack is an easy-to-use yet challenging fraud approach: it can imitate the eye-blink and mouth-open actions that are commonly used as liveness clues in face recognition systems. Moreover, the wide range of print paper materials makes the task even harder. Existing approaches neglect to decouple structural features from paper types; although attack images similar to the training data can be detected, accuracy drops dramatically when tested on unseen paper types. However, it is impossible to collect a face dataset covering all types of paper materials with sufficient identities. To alleviate these issues, we propose a Patch-Swap module, which generates synthetic images simulating part-cut print paper attacks. We randomly take two images from CelebA-HQ, crop the eye and mouth patches, and swap the corresponding patches between them. With no extra images collected or annotated, the whole process is efficient and effective. We then train a ResNet-based model, PSRes, on our synthetic data. To demonstrate the robustness and effectiveness of our approach, we conduct several experiments on the public datasets ROSE and CelebA-Spoof; the results show that PSRes outperforms existing methods. Furthermore, our approach shows better generalization capability even when tested on unseen materials.
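The core of the described Patch-Swap augmentation is cropping the eye and mouth regions from two face images and exchanging them. A minimal sketch of that operation is below, assuming the images are aligned to the same size and that the eye/mouth bounding boxes are supplied by the caller (in practice they would come from facial landmarks; the function name and box format are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def patch_swap(img_a, img_b, eye_box, mouth_box):
    """Swap eye and mouth patches between two aligned face images.

    img_a, img_b: H x W x 3 uint8 arrays, assumed to be the same size
    and roughly aligned (e.g. two CelebA-HQ faces).
    eye_box, mouth_box: (top, bottom, left, right) pixel regions;
    here they are caller-supplied assumptions standing in for
    landmark-derived crops.
    Returns two new images with the named patches exchanged.
    """
    out_a, out_b = img_a.copy(), img_b.copy()
    for top, bottom, left, right in (eye_box, mouth_box):
        # Copy each region from the *other* source image.
        out_a[top:bottom, left:right] = img_b[top:bottom, left:right]
        out_b[top:bottom, left:right] = img_a[top:bottom, left:right]
    return out_a, out_b
```

Each synthetic pair thus keeps one identity's face with another identity's eyes and mouth pasted in, mimicking the structural discontinuities of a part-cut print paper attack without collecting or annotating any extra images.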