Securing IoT RF Fingerprinting Systems with Generative Adversarial Networks

Kevin Merchant, Bryan D. Nousain
DOI: 10.1109/MILCOM47813.2019.9020907
Published in: MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM), November 2019
Cited by: 4

Abstract

Recently, a number of neural network approaches to physical-layer wireless security have been introduced. In particular, these approaches are able to authenticate the identity of different wireless transmitters by the device-specific imperfections present in their transmitted signals. In this paper, we introduce a weakness in the training protocol of these approaches, namely, that a generative adversarial network (GAN) can be trained to produce signals that are realistic enough to force classifier errors. We show that the GAN can learn to introduce signal imperfections without modifying the bandwidth or data contents of the signal, and demonstrate via experiment that classifiers trained only on transmissions from real devices are vulnerable to this sort of attack. Finally, we demonstrate that by augmenting the training dataset of the classifier with adversarial examples from a different GAN, we are able to strengthen the classifier against this vulnerability.
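The attack and defense summarized in the abstract can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical and not from the paper: a toy impairment model (DC offset plus static phase rotation) stands in for a real device's RF fingerprint, and a fixed transform with estimated parameters stands in for the trained GAN generator. The last step mirrors the paper's defense: adding GAN forgeries to the classifier's training set under a separate "adversarial" label.

```python
import numpy as np

rng = np.random.default_rng(0)

def device_fingerprint(iq, dc_offset, phase_err):
    """Toy model of device-specific impairments: a DC offset and a
    static phase rotation applied to the ideal I/Q samples."""
    return iq * np.exp(1j * phase_err) + dc_offset

def gan_like_forgery(iq, est_dc, est_phase):
    """Stand-in for a trained GAN generator: applies *estimated*
    impairments to clean I/Q, leaving bandwidth and data content
    untouched (only amplitude/phase structure changes)."""
    return iq * np.exp(1j * est_phase) + est_dc

# Clean QPSK-like payload; the forgery never alters these symbols.
bits = rng.integers(0, 4, size=256)
clean = np.exp(1j * (np.pi / 4 + bits * np.pi / 2))

# A real transmission from "device A" vs. a forgery whose estimated
# impairments are close, but not identical, to the true ones.
real_tx = device_fingerprint(clean, dc_offset=0.02 + 0.01j, phase_err=0.05)
forged = gan_like_forgery(clean, est_dc=0.019 + 0.011j, est_phase=0.048)

# Defense sketch: augment the classifier's training set so that real
# transmissions keep their device label while GAN forgeries receive a
# dedicated "adversarial" class label.
X_train = np.stack([real_tx, forged])
y_train = np.array([0, 1])  # 0 = device A, 1 = adversarial/forged
```

The point of the sketch is that the forged waveform is nearly indistinguishable from the real one sample-by-sample, which is why a classifier trained only on real devices is vulnerable, and why explicitly labeling forgeries during training is the proposed hardening step.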