A robust adversarial attack against speech recognition with UAP

Impact Factor 3.2 · JCR Q2, Computer Science, Information Systems
Ziheng Qin, Xianglong Zhang, Shujun Li
Journal: High-Confidence Computing, Vol. 3, No. 1, Article 100098 (March 2023)
DOI: 10.1016/j.hcc.2022.100098
URL: https://www.sciencedirect.com/science/article/pii/S2667295222000502
Citations: 0

Abstract

Speech recognition (SR) systems based on deep neural networks are increasingly widespread in smart devices. However, they are vulnerable to human-imperceptible adversarial attacks, which cause the SR system to output incorrect or attacker-chosen commands. Moreover, audio adversarial attacks are particularly susceptible to environmental factors, e.g., ambient noise, when deployed in the real world. To circumvent this issue, we develop a universal adversarial perturbation (UAP) generation method that constructs robust real-world UAPs by integrating ambient noise into the generation process. The proposed UAP works well in the input-agnostic and source-independent setting. We validate the effectiveness of our method on two different SR systems across a range of real-world scenarios and parameters; the results demonstrate that our method yields state-of-the-art performance, i.e., given any audio waveform, the word error rate can reach 80%. Extensive experiments investigate the impact of different parameters (e.g., signal-to-noise ratio, distance, and attack angle) on the attack success rate.
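The core idea the abstract describes — optimizing a single input-agnostic perturbation while injecting random ambient noise into each optimization step — can be sketched on a toy model. The code below is a minimal illustration, not the paper's method: the tanh "confidence" stands in for a real SR loss (which would require backpropagating through the recognizer), and all parameter values (`eps`, `noise_std`, step counts) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # toy frame length; real waveforms are far longer

# Hypothetical stand-in for an SR model: tanh(w . x) plays the role of the
# model's confidence in the correct transcription.
w = rng.standard_normal(DIM)
w /= np.linalg.norm(w)

def confidence(x):
    return np.tanh(w @ x)

def confidence_grad(x):
    t = np.tanh(w @ x)
    return (1.0 - t * t) * w  # d confidence / d x for the toy model

def generate_uap(clips, eps=0.05, steps=50, lr=0.01, noise_std=0.02):
    """One perturbation for all clips, optimized under random ambient noise."""
    delta = np.zeros(DIM)
    for _ in range(steps):
        grad = np.zeros(DIM)
        for x in clips:
            noise = rng.normal(0.0, noise_std, DIM)  # simulated ambient noise
            grad += confidence_grad(x + delta + noise)
        delta -= lr * grad / len(clips)    # descend the model's confidence
        delta = np.clip(delta, -eps, eps)  # L-inf bound keeps it imperceptible
    return delta
```

Because `delta` is optimized over many clips and many noise draws rather than for one clean input, the same perturbation degrades recognition of unseen audio even after noise is added, which is the "universal" and "robust" property the abstract claims.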

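The abstract reports attack strength as word error rate (WER). For reference, WER is the word-level Levenshtein (edit) distance between the reference transcript and the recognizer's output, normalized by the reference length; a short self-contained implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `word_error_rate("turn on the light", "turn off the light")` is 0.25 (one substitution out of four reference words); a WER of 80% means roughly four out of five reference words are substituted, deleted, or wrong by insertion.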

Source journal: High-Confidence Computing — CiteScore 4.70, self-citation rate 0.00%