Efficient face anti-spoofing via head-aware transformer based knowledge distillation with 5 MB model parameters

IF 7.2 · JCR Q1 · CAS Region 1 (Computer Science) · COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Jun Zhang , Yunfei Zhang , Feixue Shao , Xuetao Ma , Shu Feng , Yongfei Wu , Daoxiang Zhou
Journal: Applied Soft Computing · DOI: 10.1016/j.asoc.2024.112237 · Published: 2024-09-10 (Journal Article)
Citations: 0

Abstract

Efficient face anti-spoofing via head-aware transformer based knowledge distillation with 5 MB model parameters

Although face recognition technology has been deployed in many scenarios, it remains vulnerable to many types of presentation attacks, so face anti-spoofing (FAS) has become a hot topic in computer vision. Recently, vision transformers have emerged as the mainstream architecture for FAS, but they typically rely on auxiliary information, sophisticated tricks and huge numbers of model parameters. Since face-based identity authentication usually takes place on mobile-class devices, designing an effective yet lightweight model is of great significance. Inspired by the powerful global modeling ability of self-attention and the model compression ability of knowledge distillation, a simple yet effective knowledge distillation approach is proposed for FAS under the transformer framework. Our primary idea is to leverage the rich knowledge of a teacher network pre-trained on large-scale face data to guide the learning of a lightweight student network. The main contributions of our method are threefold: (1) Feature- and logits-level distillation are combined to transfer the teacher's rich knowledge to the student. (2) A head-aware strategy, built on a novel attention head correlation matrix, is proposed to handle the dimension mismatch between the middle encoder layers of the teacher and student networks. (3) Our method bridges the performance gap between teacher and student, and the resulting student network is extremely lightweight, with only 5 MB of parameters. Extensive experiments are conducted on three public face-spoofing datasets (CASIA-FASD, Replay-Attack and OULU-NPU); the results demonstrate that our method performs on par with or better than most FAS methods and outperforms many knowledge distillation methods. Meanwhile, the distilled student network achieves excellent performance with 17× fewer parameters and 9× faster inference compared to the teacher network.
The code will be publicly available at https://github.com/Maricle-zhangjun/HaTFAS.
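As a rough illustration of the two distillation signals the abstract describes (not the authors' released code), the sketch below combines a temperature-scaled logits-level KL loss with a head-aware loss that compares attention-head correlation matrices. The function names, the Pearson-style correlation, and the temperature value are our own assumptions; the key idea is that an H×H head-correlation matrix depends only on the number of heads, not on the per-head feature dimension, which is how such a strategy can sidestep teacher/student dimension mismatch:

```python
import numpy as np

def head_correlation(attn):
    """Correlation matrix between attention heads.
    attn: (H, N, N) attention maps for H heads over N tokens.
    Returns an (H, H) matrix; entry (i, j) is the Pearson-style
    correlation between the flattened maps of heads i and j."""
    H = attn.shape[0]
    flat = attn.reshape(H, -1)
    flat = flat - flat.mean(axis=1, keepdims=True)          # center each head
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    return flat @ flat.T

def head_aware_loss(attn_teacher, attn_student):
    """MSE between teacher and student head-correlation matrices.
    Both matrices are (H, H), so the per-head embedding dims may differ."""
    c_t = head_correlation(attn_teacher)
    c_s = head_correlation(attn_student)
    return float(np.mean((c_t - c_s) ** 2))

def softmax(x, tau=1.0):
    z = x / tau
    z = z - z.max(axis=-1, keepdims=True)                   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logits_distill_loss(logits_teacher, logits_student, tau=4.0):
    """Classic logits-level distillation: KL(teacher || student)
    with temperature tau, scaled by tau**2 (Hinton-style)."""
    p = softmax(logits_teacher, tau)
    q = softmax(logits_student, tau)
    return float(np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8))) * tau ** 2)
```

A training step would then minimize a weighted sum of the task loss, `logits_distill_loss`, and `head_aware_loss` taken at selected middle encoder layers; the actual layer choice and weighting in the paper may differ.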

Source journal: Applied Soft Computing (Engineering & Technology — Computer Science: Interdisciplinary Applications)
CiteScore: 15.80
Self-citation rate: 6.90%
Articles per year: 874
Review time: 10.9 months
Journal description: Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real life problems. The focus is to publish the highest quality research in application and convergence of the areas of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets and other similar techniques to address real world complexities. Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them. Therefore, the web site will continuously be updated with new articles and the publication time will be short.