Cross-Domain Audio Deepfake Detection: Dataset and Analysis

Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang
{"title":"Cross-Domain Audio Deepfake Detection: Dataset and Analysis","authors":"Yuang Li, Min Zhang, Mengxin Ren, Miaomiao Ma, Daimeng Wei, Hao Yang","doi":"arxiv-2404.04904","DOIUrl":null,"url":null,"abstract":"Audio deepfake detection (ADD) is essential for preventing the misuse of\nsynthetic voices that may infringe on personal rights and privacy. Recent\nzero-shot text-to-speech (TTS) models pose higher risks as they can clone\nvoices with a single utterance. However, the existing ADD datasets are\noutdated, leading to suboptimal generalization of detection models. In this\npaper, we construct a new cross-domain ADD dataset comprising over 300 hours of\nspeech data that is generated by five advanced zero-shot TTS models. To\nsimulate real-world scenarios, we employ diverse attack methods and audio\nprompts from different datasets. Experiments show that, through novel\nattack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve\nequal error rates of 4.1\\% and 6.5\\% respectively. Additionally, we demonstrate\nour models' outstanding few-shot ADD ability by fine-tuning with just one\nminute of target-domain data. Nonetheless, neural codec compressors greatly\naffect the detection accuracy, necessitating further research.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":"65 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2404.04904","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices from a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1% and 6.5%, respectively. Additionally, we demonstrate our models' outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly degrade detection accuracy, necessitating further research.
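The reported figures are equal error rates (EER): the operating point at which the false acceptance rate (spoofed audio accepted as bona fide) equals the false rejection rate (bona fide audio flagged as spoofed). As a reference, below is a minimal NumPy sketch of how an EER is typically computed from per-utterance detection scores; the score convention (higher means more likely spoofed), the function name, and the threshold sweep are illustrative assumptions, not code from the paper.

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate from per-utterance detection scores.

    Assumed convention (not from the paper): higher score = more likely
    spoofed; labels are 1 for spoofed audio and 0 for bona fide audio.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)

    eer, best_gap = 1.0, np.inf
    for threshold in np.unique(scores):
        predicted_spoof = scores >= threshold
        # False acceptance: spoofed utterances passed off as bona fide.
        far = np.mean(~predicted_spoof[labels == 1])
        # False rejection: bona fide utterances flagged as spoofed.
        frr = np.mean(predicted_spoof[labels == 0])
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2.0
    return eer

# Toy usage with synthetic scores; a 4.1% EER would mean FAR and FRR
# cross near 0.041 on real detector outputs.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.8, 0.1, 500),   # spoofed
                         rng.normal(0.2, 0.1, 500)])  # bona fide
labels = np.concatenate([np.ones(500, int), np.zeros(500, int)])
print(f"EER: {compute_eer(scores, labels):.3f}")
```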