Securing IoT RF Fingerprinting Systems with Generative Adversarial Networks
Kevin Merchant, Bryan D. Nousain
MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM), November 2019
DOI: 10.1109/MILCOM47813.2019.9020907 (https://doi.org/10.1109/MILCOM47813.2019.9020907)
Citations: 4
Abstract
Recently, a number of neural network approaches to physical-layer wireless security have been introduced. In particular, these approaches are able to authenticate the identity of different wireless transmitters by the device-specific imperfections present in their transmitted signals. In this paper, we introduce a weakness in the training protocol of these approaches, namely, that a generative adversarial network (GAN) can be trained to produce signals that are realistic enough to force classifier errors. We show that the GAN can learn to introduce signal imperfections without modifying the bandwidth or data contents of the signal, and demonstrate via experiment that classifiers trained only on transmissions from real devices are vulnerable to this sort of attack. Finally, we demonstrate that by augmenting the training dataset of the classifier with adversarial examples from a different GAN, we are able to strengthen the classifier against this vulnerability.
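The defense described in the abstract — augmenting the classifier's training set with adversarial examples produced by a separate GAN — can be sketched at a high level as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the extra "adversarial" class label, and the toy feature vectors standing in for transmitted-signal features are all assumptions.

```python
import numpy as np

def augment_with_adversarial(real_X, real_y, fake_X):
    """Append GAN-generated spoof signals to the training set under a
    dedicated 'adversarial' class label, so the classifier can learn to
    reject forged transmissions instead of assigning them to a real device.
    Sketch only; the paper's feature representation and GAN are not shown.
    """
    adv_label = int(real_y.max()) + 1              # new class id for spoofed signals
    fake_y = np.full(len(fake_X), adv_label)
    X = np.concatenate([real_X, fake_X], axis=0)
    y = np.concatenate([real_y, fake_y], axis=0)
    return X, y, adv_label

# Toy data: two real transmitters, 64-dimensional feature vectors
# (hypothetical stand-ins for device-specific signal imperfections).
rng = np.random.default_rng(0)
real_X = rng.normal(size=(100, 64))
real_y = rng.integers(0, 2, size=100)
fake_X = rng.normal(size=(40, 64))                 # stand-in for GAN output

X, y, adv_label = augment_with_adversarial(real_X, real_y, fake_X)
```

A classifier trained on the augmented set `(X, y)` then has an explicit "reject" class for GAN-style forgeries, rather than being forced to attribute every input to one of the known devices.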