Deepfakes Examiner: An End-to-End Deep Learning Model for Deepfakes Videos Detection

Hafsa Ilyas, Aun Irtaza, A. Javed, K. Malik
{"title":"Deepfakes Examiner: An End-to-End Deep Learning Model for Deepfakes Videos Detection","authors":"Hafsa Ilyas, Aun Irtaza, A. Javed, K. Malik","doi":"10.1109/ICOSST57195.2022.10016871","DOIUrl":null,"url":null,"abstract":"Deepfakes generation approaches have made it possible even for less technical users to generate fake videos using only the source and target images. Thus, the threats associated with deepfake video generation such as impersonating public figures, defamation, and spreading disinformation on media platforms have increased exponentially. The significant improvement in the deepfakes generation techniques necessitates the development of effective deepfakes detection methods to counter disinformation threats. Existing techniques do not provide reliable deepfakes detection particularly when the videos are generated using different deepfakes generation techniques and contain variations in illumination conditions and diverse ethnicities. Therefore, this paper proposes a novel hybrid deep learning framework, InceptionResNet-BiLSTM, that is robust to different ethnicities and varied illumination conditions, and able to detect deepfake videos generated using different techniques. The proposed InceptionResNet-BiLSTM consists of two components: customized InceptionResNetV2 and Bidirectional Long-Short Term Memory (BiLSTM). In our proposed framework, faces extracted from the videos are fed to our customized InceptionResNetV2 for extracting frame-level learnable features. The sequences of features are then used to train a temporally aware BiLSTM to classify between the real and fake video. We evaluated our proposed approach on the diverse, standard, and largescale FaceForensics++ (FF++) dataset containing videos manipulated using different techniques (i.e., DeepFakes, FaceSwap, Face2Face, FaceShifter, and NeuralTextures) and the FakeA VCeleb dataset. Our method achieved an accuracy greater than 90% on DeepFakes, FaceSwap, and Face2Face subsets. Performance and generalizability evaluation highlights the effectiveness of our method for detecting deepfake videos generated through different techniques on diverse FF++ and FakeA VCeleb datasets.","PeriodicalId":238082,"journal":{"name":"2022 16th International Conference on Open Source Systems and Technologies (ICOSST)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 16th International Conference on Open Source Systems and Technologies (ICOSST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICOSST57195.2022.10016871","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Deepfakes generation approaches have made it possible even for less technical users to generate fake videos using only source and target images. Consequently, the threats associated with deepfake video generation, such as impersonating public figures, defamation, and spreading disinformation on media platforms, have increased exponentially. The significant improvement in deepfakes generation techniques necessitates the development of effective deepfakes detection methods to counter disinformation threats. Existing techniques do not provide reliable deepfakes detection, particularly when the videos are generated using different deepfakes generation techniques and contain variations in illumination conditions and diverse ethnicities. Therefore, this paper proposes a novel hybrid deep learning framework, InceptionResNet-BiLSTM, that is robust to different ethnicities and varied illumination conditions and able to detect deepfake videos generated using different techniques. The proposed InceptionResNet-BiLSTM consists of two components: a customized InceptionResNetV2 and a Bidirectional Long Short-Term Memory (BiLSTM) network. In our framework, faces extracted from the videos are fed to the customized InceptionResNetV2 to extract frame-level learnable features. The sequences of features are then used to train a temporally aware BiLSTM that classifies videos as real or fake. We evaluated our approach on the diverse, standard, and large-scale FaceForensics++ (FF++) dataset, which contains videos manipulated using different techniques (i.e., DeepFakes, FaceSwap, Face2Face, FaceShifter, and NeuralTextures), and on the FakeAVCeleb dataset. Our method achieved an accuracy greater than 90% on the DeepFakes, FaceSwap, and Face2Face subsets. Performance and generalizability evaluations highlight the effectiveness of our method for detecting deepfake videos generated through different techniques on the diverse FF++ and FakeAVCeleb datasets.
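The pipeline described above (frame-level CNN features followed by a temporally aware BiLSTM over the frame sequence) can be sketched roughly as follows. This is a minimal illustration and not the authors' implementation: it assumes TensorFlow/Keras, an ImageNet-pretrained InceptionResNetV2 backbone without the paper's customizations, and illustrative values for the sequence length, face-crop size, and LSTM width.

```python
# Minimal sketch of a frame-level CNN + BiLSTM deepfake classifier.
# All sizes below are assumptions for illustration, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionResNetV2

SEQ_LEN = 20                  # assumed number of face frames sampled per video
FRAME_SHAPE = (160, 160, 3)   # assumed face-crop resolution

# Frame-level feature extractor: InceptionResNetV2 with global average pooling.
backbone = InceptionResNetV2(include_top=False, weights="imagenet",
                             input_shape=FRAME_SHAPE, pooling="avg")

# Video-level model: apply the backbone to every frame, then model the
# temporal sequence with a bidirectional LSTM and a binary (real/fake) head.
frames_in = layers.Input(shape=(SEQ_LEN, *FRAME_SHAPE))
frame_feats = layers.TimeDistributed(backbone)(frames_in)       # (SEQ_LEN, 1536)
temporal = layers.Bidirectional(layers.LSTM(256))(frame_feats)  # assumed width
fake_prob = layers.Dense(1, activation="sigmoid")(temporal)     # P(video is fake)

model = Model(frames_in, fake_prob)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Wiring the backbone and BiLSTM into a single model, as above, matches the end-to-end training suggested by the title; precomputing frame features and training the BiLSTM separately would be an alternative design the abstract does not rule out.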