Patch-Swap Based Approach for Face Anti-Spoofing Enhancement

Qiushi Guo, Shisha Liao, Yifan Chen, Shihua Xiao, Jin Ma, Tengteng Zhang
DOI: 10.1109/TENSYMP55890.2023.10223630
Published in: 2023 IEEE Region 10 Symposium (TENSYMP)
Publication date: 2023-09-06
Citations: 0

Abstract

Face recognition systems have been widely deployed in recent years, yet they remain vulnerable to a range of attacks, from 2D presentation attacks (PA) to 3D mask attacks. Among these, the part-cut print-paper attack is an easy-to-mount yet challenging fraud technique: by cutting out the eye and mouth regions of a printed face, it can imitate the eye-blink and mouth-open actions commonly used as liveness cues in face recognition systems. The wide variety of print-paper materials makes the task even harder. Existing approaches fail to decouple structural features from paper type: attack images similar to the training data can be detected, but accuracy drops sharply when testing on unseen paper types, and it is impractical to collect a face dataset covering all paper materials with sufficient identities. To alleviate these issues, we propose a Patch-Swap module that generates synthetic images simulating part-cut print-paper attacks: we randomly take two images from CelebA-HQ, crop the eye and mouth patches, and swap the corresponding patches between the two images. Since no extra images need to be collected or annotated, the whole process is efficient and effective. We then train a ResNet-based model, PSRes, on our synthetic data. To demonstrate the robustness and effectiveness of our approach, we conduct several experiments on the public datasets ROSE and CelebA-Spoof; the results show that PSRes outperforms existing methods. Moreover, our approach shows better generalization capability even when tested on unseen materials.
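The Patch-Swap augmentation described in the abstract (crop the eye and mouth patches from two face images and exchange them) can be sketched as follows. This is a minimal illustration, not the authors' released code: the fixed rectangular regions assume aligned face crops, and the coordinates are hypothetical placeholders — in practice the eye and mouth boxes would come from a facial-landmark detector.

```python
import numpy as np

def patch_swap(img_a, img_b, regions):
    """Exchange the given rectangular patches between two face images.

    img_a, img_b: HxWx3 uint8 arrays (assumed aligned face crops).
    regions: list of (y0, y1, x0, x1) boxes, e.g. the eye band and mouth box.
    Returns two new images; the inputs are left untouched.
    """
    out_a, out_b = img_a.copy(), img_b.copy()
    for (y0, y1, x0, x1) in regions:
        out_a[y0:y1, x0:x1] = img_b[y0:y1, x0:x1]
        out_b[y0:y1, x0:x1] = img_a[y0:y1, x0:x1]
    return out_a, out_b

# Hypothetical fixed regions for aligned 256x256 crops:
# an eye band and a mouth box (illustrative coordinates only).
REGIONS = [(90, 130, 50, 206), (170, 215, 80, 176)]

# Toy inputs standing in for two CelebA-HQ crops.
a = np.zeros((256, 256, 3), dtype=np.uint8)     # all-black "face"
b = np.full((256, 256, 3), 255, dtype=np.uint8)  # all-white "face"
swapped_a, swapped_b = patch_swap(a, b, REGIONS)
```

Each output keeps the donor image's eye/mouth texture inside an otherwise unchanged face, mimicking the visual discontinuity of a part-cut print-paper attack without collecting or annotating any new images.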