Proceedings of the 2nd ACM Workshop on Wireless Security and Machine Learning: Latest Publications

Retracted on July 26, 2022: Open set recognition through unsupervised and class-distance learning
Pub Date: 2020-07-13 | DOI: 10.1145/3395352.3402901
Andrew Draganov, Carter Brown, Enrico Mattei, Cass Dalton, Jaspreet Ranjit
Abstract: This article has been retracted from the ACM Digital Library because of author misrepresentation. The ACM published paper used an earlier work written by Xudong Wang, Stella Yu, Long Lian, Andrew Draganov, Carter Brown, Enrico Mattei, Cass Dalton and Jaspreet Ranjit. Xudong Wang, Stella Yu and Long Lian were not included as authors on the ACM paper. As a result, ACM retracted the work from the Digital Library on July 26, 2022. The retracted work remains in the ACM Digital Library for archiving purposes only and should not be used for further research or citation purposes.
Cited: 3
Adversarial machine learning based partial-model attack in IoT
Pub Date: 2020-06-25 | DOI: 10.1145/3395352.3402619
Zhengping Luo, Shangqing Zhao, Zhuo Lu, Y. Sagduyu, Jie Xu
Abstract: As the Internet of Things (IoT) has emerged as the next logical stage of the Internet, it has become imperative to understand the vulnerabilities of IoT systems when supporting diverse applications. Because machine learning has been applied in many IoT systems, the security implications of machine learning need to be studied following an adversarial machine learning approach. In this paper, we propose an adversarial machine learning based partial-model attack in the data fusion/aggregation process of IoT by controlling only a small part of the sensing devices. Our numerical results demonstrate the feasibility of this attack to disrupt decision making in data fusion with limited control of IoT devices; e.g., the attack success rate reaches 83% when the adversary tampers with only 8 out of 20 IoT devices. These results show that the machine learning engine of an IoT system is highly vulnerable to attacks even when the adversary manipulates only a small portion of IoT devices, and the outcome of these attacks severely disrupts IoT system operations.
Cited: 37
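The attack surface described in this abstract, corrupting a fused decision by controlling only a minority of sensing devices, can be illustrated with a toy sketch. This is a simplified illustration under assumed conditions (majority-vote fusion over binary 0/1 readings), not the paper's actual adversarial-ML algorithm; `fused_decision` and `partial_model_attack` are hypothetical names:

```python
def fused_decision(readings):
    # Majority-vote data fusion across binary IoT sensor readings.
    return int(sum(readings) > len(readings) / 2)

def partial_model_attack(readings, controlled):
    # The adversary flips only the readings of the devices it controls,
    # pushing the fused vote away from the honest outcome.
    honest = fused_decision(readings)
    attacked = list(readings)
    for i in controlled:
        attacked[i] = 1 - honest
    return attacked

readings = [1] * 12 + [0] * 8        # 20 devices; honest fusion decides 1
controlled = list(range(8))          # adversary controls only 8 of 20
attacked = partial_model_attack(readings, controlled)
print(fused_decision(readings), fused_decision(attacked))  # 1 0
```

Even this naive model shows how a minority of compromised devices (8 of 20, echoing the abstract's setup) can flip the fused decision once their votes are chosen adversarially rather than at random.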
Over-the-air membership inference attacks as privacy threats for deep learning-based wireless signal classifiers
Pub Date: 2020-06-25 | DOI: 10.1145/3395352.3404070
Yi Shi, Kemal Davaslioglu, Y. Sagduyu
Abstract: This paper presents how to leak private information from a wireless signal classifier by launching an over-the-air membership inference attack (MIA). As machine learning (ML) algorithms are used to process wireless signals to make decisions such as PHY-layer authentication, the training data characteristics (e.g., device-level information) and the environmental conditions (e.g., channel information) under which the data is collected may leak into the ML model. As a privacy threat, the adversary can use this leaked information to exploit vulnerabilities of the ML model following an adversarial ML approach. In this paper, the MIA is launched against a deep learning-based classifier that uses waveform, device, and channel characteristics (power and phase shifts) in the received signals for RF fingerprinting. By observing the spectrum, the adversary first builds a surrogate classifier and then an inference model to determine whether a signal of interest has been used in the training data of the receiver (e.g., a service provider). The signal of interest can then be associated with particular device and channel characteristics to launch subsequent attacks. The probability of attack success is high (more than 88%, depending on waveform and channel conditions) in identifying signals of interest (and potentially the device and channel information) used to build a target classifier. These results show that wireless signal classifiers are vulnerable to privacy threats due to the over-the-air information leakage of their ML models.
Cited: 29
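The two-stage pipeline in the abstract (surrogate classifier, then an inference model thresholding its confidence) can be sketched minimally. This is a hedged toy version, not the paper's deep-learning implementation: the surrogate is reduced to cosine similarity against a learned template, and `surrogate_confidence`, `membership_inference`, and the 0.9 threshold are all assumptions for illustration:

```python
import numpy as np

def surrogate_confidence(signal, template):
    # Surrogate classifier's confidence: cosine similarity between the
    # observed signal and the surrogate's learned template.
    return float(np.abs(signal @ template)
                 / (np.linalg.norm(signal) * np.linalg.norm(template)))

def membership_inference(signal, template, threshold=0.9):
    # Signals seen during training tend to score anomalously high
    # confidence; threshold that score to infer training-set membership.
    return surrogate_confidence(signal, template) >= threshold

rng = np.random.default_rng(0)
template = np.ones(64)
member = template + 0.01 * rng.standard_normal(64)  # near-copy of a training signal
non_member = np.tile([1.0, -1.0], 32)               # orthogonal to the template
print(membership_inference(member, template),
      membership_inference(non_member, template))   # True False
```

The design point the sketch captures is that membership inference needs only black-box confidence scores from a surrogate, which is what makes the over-the-air variant feasible for an adversary who can merely observe the spectrum.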
Algorithm selection framework for cyber attack detection
Pub Date: 2020-05-28 | DOI: 10.1145/3395352.3402623
Marc Chalé, Nathaniel D. Bastian, J. Weir
Abstract: The number of cyber threats against both wired and wireless computer systems and other components of the Internet of Things continues to increase annually. In this work, an algorithm selection framework is employed on the NSL-KDD data set and a novel paradigm of machine learning taxonomy is presented. The framework uses a combination of user input and meta-features to select the best algorithm to detect cyber attacks on a network. Performance is compared between a rule-of-thumb strategy and a meta-learning strategy. The framework removes the conjecture of the common trial-and-error algorithm selection method. The framework recommends five algorithms from the taxonomy. Both strategies recommend a high-performing algorithm, though not the best performing. The work demonstrates the close connectedness between algorithm selection and the taxonomy on which it is premised.
Cited: 5
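The meta-learning strategy the abstract mentions, matching a data set's meta-features to stored algorithm profiles, can be sketched as a nearest-profile lookup. The profile table, the two-dimensional meta-feature space, and `select_algorithm` are hypothetical placeholders, not the framework's actual taxonomy or meta-features:

```python
import math

# Hypothetical meta-feature profiles (e.g., class balance, feature sparsity)
# under which each candidate detector historically performed best.
ALGORITHM_PROFILES = {
    "decision_tree": (0.2, 0.8),
    "naive_bayes":   (0.9, 0.1),
    "svm":           (0.5, 0.5),
}

def select_algorithm(meta_features):
    # Recommend the algorithm whose stored profile lies nearest
    # (Euclidean distance) to the new data set's meta-features.
    return min(ALGORITHM_PROFILES,
               key=lambda name: math.dist(ALGORITHM_PROFILES[name], meta_features))

print(select_algorithm((0.25, 0.75)))  # decision_tree
```

This captures the abstract's core claim: replacing trial-and-error with a principled lookup keyed on measurable properties of the data, even if the recommended algorithm is only near-best rather than optimal.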
Investigating a spectral deception loss metric for training machine learning-based evasion attacks
Pub Date: 2020-05-27 | DOI: 10.1145/3395352.3402624
Matthew DelVecchio, Vanessa Arndorfer, W. Headley
Abstract: Adversarial evasion attacks have been very successful in causing poor performance in a wide variety of machine learning applications. One such application is radio frequency spectrum sensing. While evasion attacks have proven particularly successful in this area, they have done so to the detriment of the signal's intended purpose. More specifically, for real-world applications of interest, the resulting perturbed signal that is transmitted to evade an eavesdropper must not deviate far from the original signal, lest the intended information be destroyed. Recent work by the authors and others has demonstrated an attack framework that allows for intelligent balancing between these conflicting goals of evasion and communication. However, while these methodologies consider creating adversarial signals that minimize communications degradation, they have been shown to do so at the expense of the spectral shape of the signal. This opens the adversarial signal up to defenses at the eavesdropper, such as filtering, which could render the attack ineffective. To remedy this, this work introduces a new spectral deception loss metric that can be implemented during the training process to force the spectral shape to be more in line with the original signal. As an initial proof of concept, a variety of methods are presented that provide a starting point for this proposed loss. Through performance analysis, it is shown that these techniques are effective in controlling the shape of the adversarial signal.
Cited: 9
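One plausible form of a spectral-shape penalty like the one this abstract proposes is a mean squared error between magnitude spectra of the original and perturbed signals. This is an assumed formulation for illustration, not necessarily the loss the authors propose; `spectral_deception_loss` is a hypothetical name:

```python
import numpy as np

def spectral_deception_loss(original, perturbed):
    # Penalize deviation between the magnitude spectra of the original
    # and adversarially perturbed signals (MSE in the FFT domain).
    orig_mag = np.abs(np.fft.fft(original))
    pert_mag = np.abs(np.fft.fft(perturbed))
    return float(np.mean((orig_mag - pert_mag) ** 2))

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)                       # narrowband original
noisy = x + 0.5 * np.random.default_rng(0).standard_normal(256)
print(spectral_deception_loss(x, x))                 # 0.0
print(spectral_deception_loss(x, noisy) > 0)         # True
```

Added to an evasion attack's training objective, a term like this would push perturbations toward spectrally inconspicuous shapes, countering the filtering defense the abstract describes.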