Black-Box Adversarial Attacks against Audio Forensics Models

Yi Jiang, Dengpan Ye
Journal: Secur. Commun. Networks
DOI: 10.1155/2022/6410478 (https://doi.org/10.1155/2022/6410478)
Published: 2022-01-17
Citations: 2

Abstract

Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also carries the risk of abuse by criminals. A series of studies on audio forensics models has therefore arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of the adversarial attacks, we use an ensemble-model method. In view of the serious threat that adversarial examples pose to audio forensics models, we also design a defense against our proposed attack. Experimental results on 4 forensics models trained on the LA partition of the ASVspoof 2019 dataset show that our attacks achieve a 99% attack success rate on score-only black-box models, competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense reduces the attack success rate to 16% while preserving 98% detection accuracy for the forensics models.
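The abstract does not specify the attack algorithm, but score-only black-box attacks are commonly built on zeroth-order gradient estimation, where the attacker queries the model's score around the current input to approximate a gradient. The sketch below illustrates this general technique only; the `score_fn` interface, the toy objective, and all step-size parameters are assumptions for illustration, not the authors' method.

```python
import numpy as np

def nes_gradient(score_fn, x, sigma=0.001, n_samples=50, rng=None):
    """Estimate the gradient of score_fn at x using antithetic
    finite-difference (NES-style) sampling: only score queries are needed."""
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Two score queries per sample, probing in opposite directions.
        grad += (score_fn(x + sigma * u) - score_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)

def score_only_attack(score_fn, x, eps=0.01, steps=20, lr=0.005):
    """Iteratively perturb x to raise the model's score, while keeping
    the perturbation inside an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        g = nes_gradient(score_fn, x_adv)
        x_adv = x_adv + lr * np.sign(g)           # ascend the estimated gradient
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv
```

On a toy quadratic score such as `score_fn = lambda x: -float(np.sum(x ** 2))`, the attack moves the input toward higher scores using only score queries, never gradients, which is the defining property of the score-only black-box setting described in the abstract.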