{"title":"Leveraging Pixel Difference Feature for Deepfake Detection","authors":"Maoyu Mao;Chungang Yan;Junli Wang;Jun Yang","doi":"10.1109/TETCI.2025.3548803","DOIUrl":null,"url":null,"abstract":"The rise of Deepfake technology poses a formidable threat to the credibility of both judicial evidence and intellectual property safeguards. Current methods lack the ability to integrate the texture information of facial features into CNNs, despite the fact that fake contents are subtle and pixel-level. Due to the fixed grid kernel structure, CNNs are limited in their ability to describe detailed fine-grained information, making it challenging to achieve accurate image detection through pixel-level fine-grained features. To mitigate this problem, we propose a Pixel Difference Convolution (PDC) to capture local intrinsic detailed patterns via aggregating both intensity and gradient information. To avoid the redundant feature computations generated by PDC and explicitly enhance the representational power of a standard convolutional kernel, we separate PDC into vertical/horizontal and diagonal parts. Furthermore, we propose an Ensemble Dilated Convolution (EDC) to explore long-range contextual dependencies and further boost performance. We introduce a novel network, Pixel Difference Convolutional Network (PDCNet), which is built with PDC and EDC to expose Deepfake by capturing faint traces of tampering hidden in portrait images. By leveraging PDC and EDC in the information propagation process, PDCNet seamlessly incorporates both local and global pixel differences. Comprehensive experiments are performed on three databases, FF++, Celeb-DF, and DFDC to confirm that our PDCNet outperforms existing approaches. Our approach achieves accuracies of 0.9634, 0.9614, and 0.8819 in FF++, Celeb-DF, and DFDC, respectively.","PeriodicalId":13135,"journal":{"name":"IEEE Transactions on Emerging Topics in Computational Intelligence","volume":"9 4","pages":"3178-3188"},"PeriodicalIF":5.3000,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Emerging Topics in Computational Intelligence","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10937061/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
The rise of Deepfake technology poses a formidable threat to the credibility of both judicial evidence and intellectual property safeguards. Current methods fail to integrate the texture information of facial features into CNNs, even though forged content is subtle and pixel-level. Because of their fixed grid kernel structure, CNNs are limited in describing fine-grained detail, which makes accurate detection from pixel-level features difficult. To mitigate this problem, we propose Pixel Difference Convolution (PDC), which captures local intrinsic detailed patterns by aggregating both intensity and gradient information. To avoid the redundant feature computations introduced by PDC and to explicitly enhance the representational power of a standard convolutional kernel, we separate PDC into vertical/horizontal and diagonal parts. Furthermore, we propose Ensemble Dilated Convolution (EDC) to explore long-range contextual dependencies and further boost performance. We introduce a novel network, the Pixel Difference Convolutional Network (PDCNet), built with PDC and EDC, which exposes Deepfakes by capturing faint traces of tampering hidden in portrait images. By leveraging PDC and EDC during information propagation, PDCNet seamlessly incorporates both local and global pixel differences. Comprehensive experiments on three databases (FF++, Celeb-DF, and DFDC) confirm that PDCNet outperforms existing approaches, achieving accuracies of 0.9634, 0.9614, and 0.8819 on FF++, Celeb-DF, and DFDC, respectively.
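The abstract does not give the exact formulations of PDC and EDC. As a rough illustration only, the PyTorch sketch below implements a central-difference-style convolution (one common way to aggregate intensity and gradient information in a single kernel) and a simple ensemble of parallel dilated convolutions. The module names, the theta mixing weight, and the dilation rates (1, 2, 4) are assumptions made here for illustration, not the paper's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelDifferenceConv2d(nn.Module):
    # Blends the vanilla (intensity) response with a pixel-difference (gradient)
    # response: out = conv(x) - theta * x_center * sum(w), a central-difference form.
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)                                    # sum_i w_i * x_i
        if self.theta == 0:
            return out
        # sum_i w_i * (x_i - x_center) = conv(x) - x_center * sum_i w_i,
        # realized as a 1x1 convolution with the summed kernel weights.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)  # (out_ch, in_ch, 1, 1)
        center = F.conv2d(x, kernel_sum)
        return out - self.theta * center

class EnsembleDilatedConv(nn.Module):
    # Parallel dilated convolutions fused by summation to enlarge the receptive
    # field and capture long-range context without extra downsampling.
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False)
            for d in dilations
        )

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

# usage sketch:
# x = torch.randn(1, 3, 224, 224)
# y = EnsembleDilatedConv(16)(PixelDifferenceConv2d(3, 16)(x))   # (1, 16, 224, 224)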
About the journal:
The IEEE Transactions on Emerging Topics in Computational Intelligence (TETCI) publishes original articles on emerging aspects of computational intelligence, including theory, applications, and surveys.
TETCI is an electronic-only publication and publishes six issues per year.
Authors are encouraged to submit manuscripts on any emerging topic in computational intelligence, especially nature-inspired computing topics not covered by other IEEE Computational Intelligence Society journals. A few illustrative examples are glial cell networks, computational neuroscience, brain-computer interfaces, ambient intelligence, non-fuzzy computing with words, artificial life, cultural learning, artificial endocrine networks, social reasoning, artificial hormone networks, and computational intelligence for the IoT and Smart-X technologies.