Title

Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing

Authors
Publication Date
2024
DOI
Journal
IEEE Transactions on Circuits and Systems for Video Technology
ISSN
1558-2205
Volume: PP, Issue: 99
Abstract
Face recognition systems are vulnerable to various presentation attacks, and their security has become an increasingly critical concern. Although many face anti-spoofing (FAS) methods perform well in intra-dataset scenarios, their generalization remains a challenge. To address this issue, some methods adopt domain adversarial training (DAT) to extract domain-invariant features. In contrast, in this paper we propose a domain adversarial attack (DAA) method that adds perturbations to the input images, making them indistinguishable across domains and enabling domain alignment. Moreover, since models trained on limited data and attack types cannot generalize well to unknown attacks, we propose a dual perceptual and generative knowledge distillation framework for face anti-spoofing that utilizes pre-trained face-related models containing rich face priors. Specifically, we adopt two different face-related models as teachers to transfer knowledge to the target student model. The pre-trained teacher models are taken not from a face anti-spoofing task but from perceptual and generative tasks, respectively, which implicitly augments the data. By combining DAA and dual-teacher knowledge distillation, we develop a dual teacher knowledge distillation with domain alignment (DTDA) framework for face anti-spoofing. The advantages of the proposed method are verified through extensive ablation studies and comparisons with state-of-the-art methods on public datasets across multiple protocols.
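The abstract describes two mechanisms: a domain adversarial attack (DAA) that perturbs inputs toward domain indistinguishability, and feature distillation from a perceptual teacher and a generative teacher into a student FAS model. The following PyTorch-style sketch is a minimal illustration of those two ideas only; the function names, the frozen domain_classifier, the MSE feature losses, the per-teacher projection heads, and all hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of the two ideas in the abstract (NOT the authors' released code).
# Assumptions: a frozen domain classifier is available for crafting DAA perturbations,
# and the student exposes two projection heads matching each teacher's feature size.
import torch
import torch.nn.functional as F


def domain_adversarial_attack(images, domain_classifier, domain_labels,
                              epsilon=2.0 / 255, alpha=1.0 / 255, steps=3):
    """PGD-style perturbation that ASCENDS the domain-classification loss,
    so perturbed images become harder to assign to their source domain
    (the "indistinguishable across domains" property in the abstract)."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(domain_classifier(adv), domain_labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()                       # gradient ascent step
        adv = images + torch.clamp(adv - images, -epsilon, epsilon)    # project to L_inf ball
        adv = adv.clamp(0.0, 1.0)                                      # keep valid pixel range
    return adv.detach()


def dual_teacher_kd_loss(student_perc_feat, student_gen_feat,
                         perc_teacher_feat, gen_teacher_feat,
                         w_perc=1.0, w_gen=1.0):
    """Feature-level distillation from the perceptual and generative teachers.
    MSE on features is one plausible choice; the paper may use other layers or losses."""
    loss_perc = F.mse_loss(student_perc_feat, perc_teacher_feat.detach())
    loss_gen = F.mse_loss(student_gen_feat, gen_teacher_feat.detach())
    return w_perc * loss_perc + w_gen * loss_gen
```

In a training loop, one would craft DAA images per batch, feed them to the student and both frozen teachers, and add the distillation term to the usual live/spoof classification loss; the weighting between terms is a tunable design choice not specified here.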
Related Links: [IEEE Record]
University Authorship
Other
Citation Statistics
Output Type: Journal Article
Item Identifier: http://sustech.caswiz.com/handle/2SGJ60CL/828622
Collection: College of Engineering, Department of Electronic and Electrical Engineering
Author Affiliations
1.School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, Shenzhen, China
2.Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China
3.College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
4.State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
5.Harbin Institute of Technology, Shenzhen, China
6.Guangxi Medical University, Guangxi, China
7.Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
8.Shenzhen Campus of Sun Yat-sen University, Shenzhen, China
Recommended Citation
GB/T 7714
Zhe Kong, Wentian Zhang, Tao Wang, et al. Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, PP(99).
APA
Zhe Kong, Wentian Zhang, Tao Wang, Kaihao Zhang, Yuexiang Li, ... & Wenhan Luo. (2024). Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing. IEEE Transactions on Circuits and Systems for Video Technology, PP(99).
MLA
Zhe Kong, et al. "Dual Teacher Knowledge Distillation with Domain Alignment for Face Anti-spoofing." IEEE Transactions on Circuits and Systems for Video Technology PP.99 (2024).
Files in This Item
There are no files associated with this item.
Rights Policy
No data available.