D2Fusion: Dual-domain fusion with feature superposition for Deepfake detection

Xueqi Qiu, Xingyu Miao, Fan Wan, Haoran Duan, Tejal Shah, Varun Ojha, Yang Long, Rajiv Ranjan

Information Fusion, Volume 120, Article 103087 (published 2025-03-13). DOI: 10.1016/j.inffus.2025.103087
Citations: 0
Abstract
Deepfake detection is crucial for curbing the harm it causes to society. However, current Deepfake detection methods fail to thoroughly explore artifact information across different domains due to insufficient intrinsic interactions. These interactions refer to the fusion and coordination of features, after extraction, across different domains, and they are crucial for recognizing complex forgery clues. Focusing on more generalized Deepfake detection, in this work we introduce a novel bi-directional attention module to capture the local positional information of artifact clues in the spatial domain. This enables accurate artifact localization, thus addressing the coarse processing of artifact features. To further address the limitation that the proposed bi-directional attention module may not capture global, subtle forgery information in the artifact features (e.g., textures or edges), we employ a fine-grained frequency attention module in the frequency domain. By doing so, we obtain high-frequency information from the fine-grained features, which contains the global and subtle forgery cues. Although these features from the two domains can be effectively and independently improved, fusing them directly does not effectively improve detection performance. Therefore, we propose a feature superposition strategy that complements information from the spatial and frequency domains. This strategy turns the feature components into wave-like tokens, which are updated based on their phase, so that the distinctions between authentic and artifact features are amplified. Our method demonstrates significant improvements over state-of-the-art (SOTA) methods on five public Deepfake datasets in capturing abnormalities across different manipulation operations and real-life scenarios. Specifically, in intra-dataset evaluations, D²Fusion surpasses the baseline accuracy by nearly 2.5%. In cross-manipulation evaluations, it exceeds the baseline AUC by up to 6.15%. In multi-source manipulation evaluations, it exceeds the SOTA methods by up to 14.62% in P-value, 10.26% in F1-score and 15.13% in R-value. In cross-dataset experiments, it exceeds the baseline AUC by up to 6.25%. For potential applications, D²Fusion can help improve content moderation on social media and aid forensic investigations by accurately identifying tampered content.
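To make the feature superposition idea more concrete, the sketch below shows one plausible way to fuse spatial-domain and frequency-domain token features as "wave-like" tokens whose phase controls how they reinforce or cancel each other. This is only an illustrative interpretation of the abstract, not the paper's actual implementation: the class name, the learnable phase estimators, and the amplitude-based readout are all assumptions.

```python
import torch
import torch.nn as nn


class WaveSuperpositionFusion(nn.Module):
    """Minimal sketch of phase-aware feature superposition (assumed design).

    Spatial- and frequency-domain token features are each lifted to a
    wave representation (amplitude * cos(phase), amplitude * sin(phase)),
    superposed, and read out as the resulting amplitude. In-phase
    components reinforce each other, out-of-phase components cancel,
    which is one way the distinction between authentic and artifact
    features could be amplified.
    """

    def __init__(self, dim: int) -> None:
        super().__init__()
        # Hypothetical learnable phase estimators, one per domain.
        self.phase_spatial = nn.Linear(dim, dim)
        self.phase_freq = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    @staticmethod
    def _to_wave(feat: torch.Tensor, phase: torch.Tensor):
        # Treat the feature magnitude as amplitude of a wave-like token.
        return feat * torch.cos(phase), feat * torch.sin(phase)

    def forward(self, f_spatial: torch.Tensor, f_freq: torch.Tensor) -> torch.Tensor:
        # f_spatial, f_freq: (batch, tokens, dim) features from the two domains.
        r_s, i_s = self._to_wave(f_spatial, self.phase_spatial(f_spatial))
        r_f, i_f = self._to_wave(f_freq, self.phase_freq(f_freq))
        # Superpose the two waves and read out the resulting amplitude.
        real, imag = r_s + r_f, i_s + i_f
        fused = torch.sqrt(real ** 2 + imag ** 2 + 1e-8)
        return self.proj(fused)


if __name__ == "__main__":
    # Usage: fuse (B, N, C) token features from the spatial and frequency branches.
    fusion = WaveSuperpositionFusion(dim=256)
    f_sp = torch.randn(2, 196, 256)
    f_fr = torch.randn(2, 196, 256)
    print(fusion(f_sp, f_fr).shape)  # torch.Size([2, 196, 256])
```

Under this reading, simple additive fusion corresponds to the special case where both phases are zero; learning the phases is what lets complementary spatial and frequency cues interfere constructively while conflicting ones are suppressed.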
Journal Introduction
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses as well as those demonstrating their application to real-world problems will be welcome.