{"title":"Black-Box Adversarial Attacks against Audio Forensics Models","authors":"Yi Jiang, Dengpan Ye","doi":"10.1155/2022/6410478","DOIUrl":null,"url":null,"abstract":"Speech synthesis technology has made great progress in recent years and is widely used in the Internet of things, but it also brings the risk of being abused by criminals. Therefore, a series of researches on audio forensics models have arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that only relies on output scores of audio forensics models. To improve the transferability of adversarial attacks, we utilize the ensemble-model method. A defense method is also designed against our proposed attack method under the view of the huge threat of adversarial examples to audio forensics models. Our experimental results on 4 forensics models trained on the LA part of the ASVspoof 2019 dataset show that our attacks can get a \n \n 99\n %\n \n attack success rate on score-only black-box models, which is competitive to the best of white-box attacks, and \n \n 60\n %\n \n attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to \n \n 16\n %\n \n and guarantees \n \n 98\n %\n \n detection accuracy of forensics models.","PeriodicalId":167643,"journal":{"name":"Secur. Commun. Networks","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Secur. Commun. Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2022/6410478","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Speech synthesis technology has made great progress in recent years and is widely used in the Internet of Things, but it also brings the risk of abuse by criminals. A series of studies on audio forensics models has therefore arisen to reduce or eliminate these negative effects. In this paper, we propose a black-box adversarial attack method that relies only on the output scores of audio forensics models. To improve the transferability of the adversarial attacks, we use an ensemble-model method. In view of the serious threat that adversarial examples pose to audio forensics models, we also design a defense method against the proposed attack. Experimental results on four forensics models trained on the LA partition of the ASVspoof 2019 dataset show that our attacks achieve a 99% attack success rate on score-only black-box models, competitive with the best white-box attacks, and a 60% attack success rate on decision-only black-box models. Finally, our defense method reduces the attack success rate to 16% while preserving 98% detection accuracy for the forensics models.
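In the score-only setting, the attacker can query the target model and observe its output score but has no access to gradients or internals. Below is a minimal, hypothetical sketch of how such an attack can operate, using NES-style gradient estimation from score queries; `model_score`, the hyperparameters, and the success criterion are placeholder assumptions for illustration, not the paper's implementation.

```python
# Hypothetical score-only black-box attack via NES-style gradient estimation.
# The waveform is nudged so a placeholder forensics scorer rates a spoofed
# utterance as more "genuine", under an L_inf perturbation budget.
import numpy as np

def model_score(audio: np.ndarray) -> float:
    """Placeholder scorer in [0, 1] (higher = judged more genuine).
    In practice this would be a query to the target forensics model."""
    return float(1.0 / (1.0 + np.exp(-audio.mean())))

def nes_gradient(audio: np.ndarray, n_samples: int = 50, sigma: float = 1e-3) -> np.ndarray:
    """Estimate the gradient of the score w.r.t. the waveform using only
    paired (antithetic) score queries, with no access to model internals."""
    grad = np.zeros_like(audio)
    for _ in range(n_samples):
        u = np.random.randn(*audio.shape)
        grad += (model_score(audio + sigma * u) - model_score(audio - sigma * u)) * u
    return grad / (2.0 * sigma * n_samples)

def score_only_attack(audio: np.ndarray, eps: float = 2e-3,
                      alpha: float = 5e-4, steps: int = 100) -> np.ndarray:
    """Iterative sign ascent on the estimated gradient, keeping the
    perturbation within an L_inf ball of radius eps around the original."""
    adv = audio.copy()
    for _ in range(steps):
        g = nes_gradient(adv)
        adv = adv + alpha * np.sign(g)                 # ascend estimated gradient
        adv = np.clip(adv, audio - eps, audio + eps)   # enforce L_inf budget
        adv = np.clip(adv, -1.0, 1.0)                  # keep a valid waveform
        if model_score(adv) > 0.5:                     # assumed success threshold
            break
    return adv

# Usage: attack one second of 16 kHz audio.
x = np.random.uniform(-0.1, 0.1, size=16000).astype(np.float32)
x_adv = score_only_attack(x)
print(f"score before: {model_score(x):.3f}, after: {model_score(x_adv):.3f}")
```

An ensemble variant along the lines the abstract mentions would average the estimated gradients (or scores) over several local surrogate models before each step, which tends to improve transfer to an unseen target.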