D-CAPTCHA++: A Study of Resilience of Deepfake CAPTCHA under Transferable Imperceptible Adversarial Attack

Hong-Hanh Nguyen-Le, Van-Tuan Tran, Dinh-Thuc Nguyen, Nhien-An Le-Khac
arXiv:2409.07390 · arXiv - EE - Audio and Speech Processing · 11 September 2024

Abstract

Advances in generative AI have improved audio synthesis models, including text-to-speech and voice conversion. This raises concerns about potential misuse in social manipulation and political interference, as synthetic speech has become indistinguishable from natural human speech. Several speech-generation programs are used for malicious purposes, especially impersonating individuals through phone calls. Detecting fake audio is therefore crucial to maintaining social security and safeguarding the integrity of information. Recent research has proposed the D-CAPTCHA system, based on a challenge-response protocol, to differentiate fake phone calls from real ones. In this work, we study the resilience of this system and introduce a more robust version, D-CAPTCHA++, to defend against fake calls. Specifically, we first expose the vulnerability of the D-CAPTCHA system under a transferable imperceptible adversarial attack. Second, we mitigate this vulnerability by applying adversarial training to the D-CAPTCHA deepfake detectors and task classifiers.
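The attack the abstract describes perturbs an input just enough to flip a detector's decision while keeping the change imperceptible. The sketch below illustrates the core idea with an FGSM-style perturbation against a toy logistic "detector"; the detector, feature vector, and epsilon bound are all illustrative stand-ins, not the models or attack used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deepfake detector: logistic regression over a
# 16-dimensional "audio feature" vector. (Illustrative only; the paper
# attacks real neural detectors inside the D-CAPTCHA pipeline.)
w = rng.normal(size=16)
b = 0.0

def score(x):
    """Detector confidence that x is fake (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A "fake" sample the detector currently catches (score > 0.5).
x_fake = w / np.linalg.norm(w)  # aligned with w, so the score is high

# FGSM-style imperceptible perturbation: one signed-gradient step that
# lowers the fake score, bounded in L-infinity norm by a small epsilon.
eps = 0.2
grad = score(x_fake) * (1.0 - score(x_fake)) * w  # d(score)/dx
x_adv = x_fake - eps * np.sign(grad)

# The perturbation stays within the epsilon budget yet reduces the
# detector's confidence; the defense in the paper (adversarial training)
# retrains detectors on exactly such perturbed inputs.
print(score(x_fake), score(x_adv))
```

The perturbation is bounded element-wise by `eps`, which is the sense in which it is "imperceptible"; transferability means a perturbation crafted against one surrogate detector also degrades other detectors it was not optimized against.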