Title: Transferable universal adversarial perturbations against speaker recognition systems
Authors: Xiaochen Liu, Hao Tan, Junjian Zhang, Aiping Li, Zhaoquan Gu
Journal: World Wide Web, vol. 25, no. 1
Published: 2024-05-09
DOI: https://doi.org/10.1007/s11280-024-01274-3
Citations: 0
Abstract
Deep neural networks (DNNs) exhibit powerful feature extraction capabilities, making them highly advantageous in numerous tasks, and DNN-based techniques have become widely adopted in speaker recognition. However, imperceptible adversarial perturbations can severely disrupt the decisions made by DNNs. Moreover, researchers have identified universal adversarial perturbations that can attack deep neural networks both efficiently and effectively. In this paper, we propose an algorithm for conducting effective universal adversarial attacks by investigating the dominant features in the speaker recognition task. Through experiments in various scenarios, we find that our perturbations are not only more effective and harder to detect but also exhibit a certain degree of transferability across different datasets and models.
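The abstract does not describe the paper's dominant-feature algorithm in detail, but the core idea of a universal adversarial perturbation (UAP) for audio can be sketched generically: a single additive noise vector, bounded in L-infinity norm, is applied to every input utterance. The function names, the perturbation budget, and the tiling strategy below are illustrative assumptions, not the authors' method.

```python
import numpy as np

EPS = 0.005  # L-infinity perturbation budget (assumed value, not from the paper)

def apply_uap(waveform: np.ndarray, uap: np.ndarray) -> np.ndarray:
    """Add one universal perturbation to any utterance.

    The same `uap` vector is reused for every input ("universal"): it is
    tiled or truncated to the utterance length, added to the waveform, and
    the result is clipped back to the valid audio range [-1, 1].
    """
    reps = int(np.ceil(len(waveform) / len(uap)))
    noise = np.tile(uap, reps)[: len(waveform)]
    return np.clip(waveform + noise, -1.0, 1.0)

# One perturbation, many utterances of different lengths.
rng = np.random.default_rng(0)
# Placeholder perturbation (1 s at 16 kHz); a real attack would optimize
# this vector against a speaker-recognition model's loss.
uap = np.clip(rng.normal(0.0, EPS / 3, size=16000), -EPS, EPS)
utterances = [rng.uniform(-0.5, 0.5, size=n) for n in (16000, 24000, 8000)]
adversarial = [apply_uap(x, uap) for x in utterances]
```

Because the perturbation is fixed in advance, it can be precomputed once and applied to unseen utterances at attack time, which is what makes UAPs efficient relative to per-input attacks.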