Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals

Wenhan Zhang;Marwan Krunz;Gregory Ditzler
{"title":"Stealthy Adversarial Attacks on Machine Learning-Based Classifiers of Wireless Signals","authors":"Wenhan Zhang;Marwan Krunz;Gregory Ditzler","doi":"10.1109/TMLCN.2024.3366161","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) has been successfully applied to classification tasks in many domains, including computer vision, cybersecurity, and communications. Although highly accurate classifiers have been developed, research shows that these classifiers are, in general, vulnerable to adversarial machine learning (AML) attacks. In one type of AML attack, the adversary trains a surrogate classifier (called the attacker’s classifier) to produce intelligently crafted low-power “perturbations” that degrade the accuracy of the targeted (defender’s) classifier. In this paper, we focus on radio frequency (RF) signal classifiers, and study their vulnerabilities to AML attacks. Specifically, we consider several exemplary protocol and modulation classifiers, designed using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We first show the high accuracy of such classifiers under random noise (AWGN). We then study their performance under three types of low-power AML perturbations (FGSM, PGD, and DeepFool), considering different amounts of information at the attacker. On one extreme (so-called “white-box” attack), the attacker has complete knowledge of the defender’s classifier and its training data. As expected, our results reveal that in this case, the AML attack significantly degrades the defender’s classification accuracy. We gradually reduce the attacker’s knowledge and study five attack scenarios that represent different amounts of information at the attacker. Surprisingly, even when the attacker has limited or no knowledge of the defender’s classifier and its power is relatively low, the attack is still significant. We also study various practical issues related to the wireless environment, including channel impairments and misalignment between attacker and transmitter signals. Furthermore, we study the effectiveness of intermittent AML attacks. Even under such imperfections, a low-power AML attack can still significantly reduce the defender’s classification accuracy for both protocol and modulation classifiers. Lastly, we propose a two-step adversarial training mechanism to defend against AML attacks and contrast its performance against other state-of-the-art defense strategies. The proposed defense approach increases the classification accuracy by up to 50%, even in scenarios where the attacker has perfect knowledge of the defender and exhibits a relatively large power budget.","PeriodicalId":100641,"journal":{"name":"IEEE Transactions on Machine Learning in Communications and Networking","volume":"2 ","pages":"261-279"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10436107","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Machine Learning in Communications and Networking","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10436107/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Machine learning (ML) has been successfully applied to classification tasks in many domains, including computer vision, cybersecurity, and communications. Although highly accurate classifiers have been developed, research shows that these classifiers are, in general, vulnerable to adversarial machine learning (AML) attacks. In one type of AML attack, the adversary trains a surrogate classifier (called the attacker’s classifier) to produce intelligently crafted low-power “perturbations” that degrade the accuracy of the targeted (defender’s) classifier. In this paper, we focus on radio frequency (RF) signal classifiers and study their vulnerabilities to AML attacks. Specifically, we consider several exemplary protocol and modulation classifiers, designed using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). We first show the high accuracy of such classifiers under random additive white Gaussian noise (AWGN). We then study their performance under three types of low-power AML perturbations (FGSM, PGD, and DeepFool), considering different amounts of information available to the attacker. At one extreme (the so-called “white-box” attack), the attacker has complete knowledge of the defender’s classifier and its training data. As expected, our results reveal that in this case, the AML attack significantly degrades the defender’s classification accuracy. We gradually reduce the attacker’s knowledge and study five attack scenarios that represent different levels of information available to the attacker. Surprisingly, even when the attacker has limited or no knowledge of the defender’s classifier and its power is relatively low, the attack still has a significant impact. We also study various practical issues related to the wireless environment, including channel impairments and misalignment between the attacker’s and transmitter’s signals. Furthermore, we study the effectiveness of intermittent AML attacks. Even under such imperfections, a low-power AML attack can still significantly reduce the defender’s classification accuracy for both protocol and modulation classifiers. Lastly, we propose a two-step adversarial training mechanism to defend against AML attacks and compare its performance with other state-of-the-art defense strategies. The proposed defense increases classification accuracy by up to 50%, even when the attacker has perfect knowledge of the defender and a relatively large power budget.
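To make the attack model concrete, the following is a minimal sketch of how an FGSM-style perturbation of the kind evaluated in the paper could be generated against a classifier of I/Q samples. It assumes a generic PyTorch classifier with logit outputs; the function name fgsm_perturbation, the tensor shapes, and the epsilon budget are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical FGSM sketch against an RF signal classifier (not the paper's code).
import torch
import torch.nn as nn

def fgsm_perturbation(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                      epsilon: float = 0.01) -> torch.Tensor:
    """Return x + epsilon * sign(grad_x loss): a single-step, low-power
    adversarial example. x holds I/Q samples, y the true class labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One gradient-sign step: the crafted low-power "perturbation"
    # described in the abstract.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

PGD differs from this sketch only in that it repeats the sign step several times, projecting the result back onto the epsilon-ball around x after each iteration.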
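The abstract describes the defense only at a high level (a two-step adversarial training mechanism). As a point of reference, the sketch below shows a standard single-step adversarial training update that mixes clean and perturbed examples; the helper name adversarial_train_step and the 50/50 loss weighting are assumptions, not the authors' two-step procedure.

```python
# Generic adversarial-training baseline (not the paper's two-step mechanism).
# Reuses the fgsm_perturbation() sketch above.
def adversarial_train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                           x: torch.Tensor, y: torch.Tensor,
                           epsilon: float = 0.01) -> float:
    model.train()
    # Craft adversarial examples against the current model state.
    x_adv = fgsm_perturbation(model, x, y, epsilon)
    optimizer.zero_grad()  # also clears grads accumulated while crafting x_adv
    # Train on an equal mix of clean and perturbed examples (assumed weighting).
    loss = (nn.functional.cross_entropy(model(x), y)
            + nn.functional.cross_entropy(model(x_adv), y)) / 2
    loss.backward()
    optimizer.step()
    return loss.item()
```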