Boosting Deepfake Detection Generalizability via Expansive Learning and Confidence Judgement

Kuiyuan Zhang; Zeming Hou; Zhongyun Hua; Yifeng Zheng; Leo Yu Zhang

IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 953-966. Published: 2024-09-19. DOI: 10.1109/TCSVT.2024.3462985. Available at: https://ieeexplore.ieee.org/document/10684474/
Citations: 0
Abstract
As deepfake technology poses severe threats to information security, significant effort has been devoted to deepfake detection. To generalize to new types of deepfakes, existing models must learn about the new deepfakes without losing prior knowledge, a challenge known as catastrophic forgetting (CF). Existing methods mainly address this issue by using domain adaptation to learn about the new deepfakes. However, these methods are constrained to a small portion of data samples from the new deepfakes, and they suffer from CF when the number of samples used for domain adaptation increases. This results in poor average performance across the source and target domains. In this paper, we introduce a novel approach to boost the generalizability of deepfake detection. Our approach follows a two-stage training process: training in the source domain (prior deepfakes that have been used for training) and domain adaptation to the target domain (new types of deepfakes). In the first stage, we employ expansive learning to train our expanded model from a well-trained teacher model. In the second stage, we transfer the expanded model to the target domain while removing assistant components. For the model architecture, we propose a frequency extraction module that extracts frequency features as a complement to spatial features, and we introduce a spatial-frequency contrastive loss to enhance feature learning. Moreover, we develop a confidence judgement module to eliminate conflicts between new and prior knowledge. Experimental results demonstrate that our method achieves better average accuracy across the source and target domains even when using large-scale data samples from the target domain, and it exhibits superior generalizability compared to state-of-the-art methods.
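The abstract does not specify the internals of the frequency extraction module or the exact form of the spatial-frequency contrastive loss. As one possible reading (purely illustrative, not the paper's implementation), frequency features could be derived from an FFT magnitude spectrum, and the contrastive loss could take an InfoNCE-like form that pulls each sample's spatial and frequency embeddings together while pushing mismatched pairs apart. A minimal NumPy sketch under those assumptions:

```python
import numpy as np

def frequency_features(img):
    """Hypothetical frequency extraction: log-magnitude of the
    centered 2D FFT of an image (a common frequency representation)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

def spatial_frequency_contrastive_loss(spatial, freq, temperature=0.1):
    """Illustrative InfoNCE-style loss: for each sample, its own
    (spatial, frequency) embedding pair is the positive; all other
    samples' frequency embeddings in the batch are negatives."""
    # L2-normalize both embedding sets
    s = spatial / np.linalg.norm(spatial, axis=1, keepdims=True)
    f = freq / np.linalg.norm(freq, axis=1, keepdims=True)
    logits = s @ f.T / temperature                    # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # matched pairs on diagonal

rng = np.random.default_rng(0)
spatial_emb = rng.normal(size=(4, 8))
loss_aligned = spatial_frequency_contrastive_loss(spatial_emb, spatial_emb)
loss_random = spatial_frequency_contrastive_loss(spatial_emb,
                                                 rng.normal(size=(4, 8)))
```

Here `frequency_features`, the InfoNCE formulation, and the `temperature` value are all assumptions chosen to make the idea concrete; the paper's actual module and loss may differ substantially.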
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.