{"title":"Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes","authors":"Zhixin Xie, Jun Luo","doi":"arxiv-2409.10889","DOIUrl":null,"url":null,"abstract":"Real-time deepfake, a type of generative AI, is capable of \"creating\"\nnon-existing contents (e.g., swapping one's face with another) in a video. It\nhas been, very unfortunately, misused to produce deepfake videos (during web\nconferences, video calls, and identity authentication) for malicious purposes,\nincluding financial scams and political misinformation. Deepfake detection, as\nthe countermeasure against deepfake, has attracted considerable attention from\nthe academic community, yet existing works typically rely on learning passive\nfeatures that may perform poorly beyond seen datasets. In this paper, we\npropose SFake, a new real-time deepfake detection method that innovatively\nexploits deepfake models' inability to adapt to physical interference.\nSpecifically, SFake actively sends probes to trigger mechanical vibrations on\nthe smartphone, resulting in the controllable feature on the footage.\nConsequently, SFake determines whether the face is swapped by deepfake based on\nthe consistency of the facial area with the probe pattern. We implement SFake,\nevaluate its effectiveness on a self-built dataset, and compare it with six\nother detection methods. The results show that SFake outperforms other\ndetection methods with higher detection accuracy, faster process speed, and\nlower memory consumption.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"8 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10889","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Real-time deepfake, a type of generative AI, is capable of "creating" non-existent content in a video, for example swapping one person's face with another's. It has, unfortunately, been misused to produce deepfake videos during web conferences, video calls, and identity authentication for malicious purposes, including financial scams and political misinformation. Deepfake detection, as the countermeasure against deepfakes, has attracted considerable attention from the academic community, yet existing works typically rely on learning passive features that may perform poorly beyond the datasets they were trained on. In this paper, we propose SFake, a new real-time deepfake detection method that exploits deepfake models' inability to adapt to physical interference. Specifically, SFake actively sends probes that trigger mechanical vibrations of the smartphone, imprinting a controllable motion feature on the footage. SFake then determines whether the face has been swapped by a deepfake based on the consistency of the facial area with the probe pattern. We implement SFake, evaluate its effectiveness on a self-built dataset, and compare it with six other detection methods. The results show that SFake outperforms the other methods, with higher detection accuracy, faster processing speed, and lower memory consumption.
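To make the probe-consistency idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the kind of check the abstract describes: a known vibration probe pattern is compared against the motion of the facial region estimated from the video. The function names (facial_motion_trace, probe_consistency, is_deepfake), the use of Farneback optical flow, and the threshold value are all assumptions for illustration.

```python
# Hypothetical sketch of SFake's core idea (not the paper's actual code):
# correlate a known probe-induced vibration pattern with the estimated motion
# of the facial region. A genuine capture shakes with the phone; a synthesized
# face overlaid on the stream may fail to follow the probe.

import numpy as np
import cv2


def facial_motion_trace(frames, face_box):
    """Estimate mean vertical displacement of the facial region across frames.

    frames:   list of grayscale frames (H x W uint8 arrays)
    face_box: (x, y, w, h) face bounding box, assumed fixed for simplicity
    """
    x, y, w, h = face_box
    trace = []
    prev = frames[0][y:y + h, x:x + w]
    for frame in frames[1:]:
        cur = frame[y:y + h, x:x + w]
        # Dense optical flow inside the face crop (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(
            prev, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        # Average vertical displacement for this frame pair.
        trace.append(float(flow[..., 1].mean()))
        prev = cur
    return np.asarray(trace)


def probe_consistency(motion_trace, probe_signal):
    """Normalized correlation between facial motion and the probe pattern."""
    m = (motion_trace - motion_trace.mean()) / (motion_trace.std() + 1e-8)
    p = (probe_signal - probe_signal.mean()) / (probe_signal.std() + 1e-8)
    n = min(len(m), len(p))
    return float(np.abs(np.correlate(m[:n], p[:n], mode="valid")).max() / n)


def is_deepfake(frames, face_box, probe_signal, threshold=0.4):
    """Flag footage as fake if facial motion does not follow the probe."""
    score = probe_consistency(facial_motion_trace(frames, face_box), probe_signal)
    return score < threshold  # low consistency -> likely face-swapped
```

The design choice here, under the stated assumptions, is that the probe signal is known to the detector and repeatable, so a simple correlation score suffices for a sketch; the actual SFake pipeline as described in the paper likely uses a more robust facial-region tracking and decision procedure.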