Deepfake CAPTCHA: A Method for Preventing Fake Calls

Lior Yasur, Guy Frankovits, Fred M. Grabovski, Yisroel Mirsky
DOI: 10.1145/3579856.3595801
Venue: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
Publication date: 2023-01-08
Citations: 4

Abstract

Deep learning technology has made it possible to generate realistic content of specific individuals. These ‘deepfakes’ can now be generated in real-time which enables attackers to impersonate people over audio and video calls. Moreover, some methods only need a few images or seconds of audio to steal an identity. Existing defenses perform passive analysis to detect fake content. However, with the rapid progress of deepfake quality, this may be a losing game. In this paper, we propose D-CAPTCHA: an active defense against real-time deepfakes. The approach is to force the adversary into the spotlight by challenging the deepfake model to generate content which exceeds its capabilities. By doing so, passive detection becomes easier since the content will be distorted. In contrast to existing CAPTCHAs, we challenge the AI’s ability to create content as opposed to its ability to classify content. In this work we focus on real-time audio deepfakes and present preliminary results on video. In our evaluation we found that D-CAPTCHA outperforms state-of-the-art audio deepfake detectors with an accuracy of 91-100% depending on the challenge (compared to 71% without challenges). We also performed a study on 41 volunteers to understand how threatening current real-time deepfake attacks are. We found that the majority of the volunteers could not tell the difference between real and fake audio.
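The challenge-response idea in the abstract can be sketched as a simple flow: issue the caller a task that a real-time deepfake model is unlikely to render cleanly, then score the recorded attempt with a passive detector, whose job is easier because model failures surface as audible distortion. The sketch below is a hypothetical illustration, not the authors' implementation; the challenge list, `passive_fake_score` stub, and threshold are all assumptions.

```python
# Hypothetical sketch of a D-CAPTCHA-style challenge-response check
# (illustrative only; not the paper's actual system).
import random

# Example tasks that stress a real-time voice-cloning model (assumed).
CHALLENGES = [
    "hum a short melody",
    "clap twice while speaking",
    "whisper the next phrase",
    "sing the last sentence",
]

def passive_fake_score(audio: bytes) -> float:
    """Placeholder for a real passive deepfake detector.
    Returns an estimated probability that the audio is fake."""
    return 0.0  # stub value; a real detector would analyze the audio

def verify_caller(record_response, threshold: float = 0.5) -> bool:
    """Issue a random challenge, record the caller's attempt, and
    accept the call only if the detector's fake-score stays below
    the threshold. True means the caller is judged likely real."""
    challenge = random.choice(CHALLENGES)
    audio = record_response(challenge)  # caller performs the task live
    return passive_fake_score(audio) < threshold
```

In this framing, the challenge does not replace the passive detector; it amplifies the detector's signal by forcing the generative model outside the distribution it was trained to mimic.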