Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning: Latest Publications

Learning Model for Cyber-attack Index Based Virtual Wireless Network Selection
Pub Date: 2021-06-28 | DOI: 10.1145/3468218.3469038
Naveen Naik Sapavath, D. Rawat
{"title":"Learning Model for Cyber-attack Index Based Virtual Wireless Network Selection","authors":"Naveen Naik Sapavath, D. Rawat","doi":"10.1145/3468218.3469038","DOIUrl":"https://doi.org/10.1145/3468218.3469038","url":null,"abstract":"With the availability of different wireless networks in wireless virtualization, dynamic network selection in a given heterogeneous environment is challenging task when there is cyber security and data privacy requirements for wireless users. Selection of low cyber risk network can result in good service experience to the users. Network selection in virtualized wireless environment is determined by various factors such as Quality of Experience (QoE), data loss prevention, security and privacy. In this paper, we propose a learning model for dynamic network selection based on cyber-attack index (CI) value of networks. We have develop a recommendation system which recommends user to select the most secure network with least CI value. A mathematical model based on least squares and convex optimization is presented which predicts the CI of network with goal of maximizing the number of wireless users/subscribers. Numerical results show that the CI based recommendation system outperforms the traditional prediction based systems. Furthermore, we compare our approach with existing approaches and found that the proposed approach results in better performance in terms maximizing the number of wireless users/subscribers and better services to them.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130022355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Low-cost Influence-Limiting Defense against Adversarial Machine Learning Attacks in Cooperative Spectrum Sensing
Pub Date: 2021-06-28 | DOI: 10.1145/3468218.3469051
Zhengping Luo, Shangqing Zhao, Rui Duan, Zhuo Lu, Y. Sagduyu, Jie Xu
{"title":"Low-cost Influence-Limiting Defense against Adversarial Machine Learning Attacks in Cooperative Spectrum Sensing","authors":"Zhengping Luo, Shangqing Zhao, Rui Duan, Zhuo Lu, Y. Sagduyu, Jie Xu","doi":"10.1145/3468218.3469051","DOIUrl":"https://doi.org/10.1145/3468218.3469051","url":null,"abstract":"Cooperative spectrum sensing aims to improve the reliability of spectrum sensing by individual sensors for better utilization of the scarce spectrum bands, which gives the feasibility for secondary spectrum users to transmit their signals when primary users remain idle. However, there are various vulnerabilities experienced in cooperative spectrum sensing, especially when machine learning techniques are applied. The influence-limiting defense is proposed as a method to defend the data fusion center when a small number of spectrum sensing devices is controlled by an intelligent attacker to send erroneous sensing results. Nonetheless, this defense suffers from a computational complexity problem. In this paper, we propose a low-cost version of the influence-limiting defense and demonstrate that it can decrease the computation cost significantly (the time cost is reduced to less than 20% of the original defense) while still maintaining the same level of defense performance.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132861728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks: Invited Paper
Pub Date: 2021-06-28 | DOI: 10.1145/3468218.3469048
Morriel Kasher, Michael Zhao, Aryeh Greenberg, Devin Gulati, S. Kokalj-Filipovic, P. Spasojevic
{"title":"Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks: Invited Paper","authors":"Morriel Kasher, Michael Zhao, Aryeh Greenberg, Devin Gulati, S. Kokalj-Filipovic, P. Spasojevic","doi":"10.1145/3468218.3469048","DOIUrl":"https://doi.org/10.1145/3468218.3469048","url":null,"abstract":"The BackDoor system provides a method for inaudibly transmitting messages that are recorded by unmodified receiver microphones as if they were transmitted audibly. Adversarial Audio attacks allow for an audio sample to sound like one message but be transcribed by a speech processing neural network as a different message. This study investigates the potential applications of Adversarial Audio through the BackDoor system to manipulate voice-enabled devices, or VEDs, without detection by humans or other nearby microphones. We discreetly transmit voice commands by applying robust, noise-resistant adversarial audio perturbations through BackDoor on top of a predetermined speech or music base sample to achieve a desired target transcription. Our analysis compares differing base carriers, target phrases, and perturbation strengths for maximal effectiveness through BackDoor. We determined that such an attack is feasible and that the desired adversarial properties of the audio sample are maintained even when transmitted through BackDoor.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128714996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Adversarial Classification of the Attacks on Smart Grids Using Game Theory and Deep Learning
Pub Date: 2021-06-06 | DOI: 10.1145/3468218.3469047
K. Hamedani, Lingjia Liu, Jithin Jagannath, Y. Yi
{"title":"Adversarial Classification of the Attacks on Smart Grids Using Game Theory and Deep Learning","authors":"K. Hamedani, Lingjia Liu, Jithin Jagannath, Y. Yi","doi":"10.1145/3468218.3469047","DOIUrl":"https://doi.org/10.1145/3468218.3469047","url":null,"abstract":"Smart grids are vulnerable to cyber-attacks. This paper proposes a game-theoretic approach to evaluate the variations caused by an attacker on the power measurements. Adversaries can gain financial benefits through the manipulation of the meters of smart grids. On the other hand, there is a defender that tries to maintain the accuracy of the meters. A zero-sum game is used to model the interactions between the attacker and defender. In this paper, two different defenders are used and the effectiveness of each defender in different scenarios is evaluated. Multi-layer perceptrons (MLPs) and traditional state estimators are the two defenders that are studied in this paper. The utility of the defender is also investigated in adversary-aware and adversary-unaware situations. Our simulations suggest that the utility which is gained by the adversary drops significantly when the MLP is used as the defender. It will be shown that the utility of the defender is variant in different scenarios, based on the defender that is being used. In the end, we will show that this zero-sum game does not yield a pure strategy, and the mixed strategy of the game is calculated.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132709321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Variational Leakage: The Role of Information Complexity in Privacy Leakage
Pub Date: 2021-06-05 | DOI: 10.1145/3468218.3469040
A. A. Atashin, Behrooz Razeghi, D. Gunduz, S. Voloshynovskiy
{"title":"Variational Leakage: The Role of Information Complexity in Privacy Leakage","authors":"A. A. Atashin, Behrooz Razeghi, D. Gunduz, S. Voloshynovskiy","doi":"10.1145/3468218.3469040","DOIUrl":"https://doi.org/10.1145/3468218.3469040","url":null,"abstract":"We study the role of information complexity in privacy leakage about an attribute of an adversary's interest, which is not known a priori to the system designer. Considering the supervised representation learning setup and using neural networks to parameterize the variational bounds of information quantities, we study the impact of the following factors on the amount of information leakage: information complexity regularizer weight, latent space dimension, the cardinalities of the known utility and unknown sensitive attribute sets, the correlation between utility and sensitive attributes, and a potential bias in a sensitive attribute of adversary's interest. We conduct extensive experiments on Colored-MNIST and CelebA datasets to evaluate the effect of information complexity on the amount of intrinsic leakage.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122814762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Explainability-based Backdoor Attacks Against Graph Neural Networks
Pub Date: 2021-04-08 | DOI: 10.1145/3468218.3469046
Jing Xu, Minhui Xue, S. Picek
{"title":"Explainability-based Backdoor Attacks Against Graph Neural Networks","authors":"Jing Xu, Minhui Xue, S. Picek","doi":"10.1145/3468218.3469046","DOIUrl":"https://doi.org/10.1145/3468218.3469046","url":null,"abstract":"Backdoor attacks represent a serious threat to neural network models. A backdoored model will misclassify the trigger-embedded inputs into an attacker-chosen target label while performing normally on other benign inputs. There are already numerous works on backdoor attacks on neural networks, but only a few works consider graph neural networks (GNNs). As such, there is no intensive research on explaining the impact of trigger injecting position on the performance of backdoor attacks on GNNs. To bridge this gap, we conduct an experimental investigation on the performance of backdoor attacks on GNNs. We apply two powerful GNN explainability approaches to select the optimal trigger injecting position to achieve two attacker objectives - high attack success rate and low clean accuracy drop. Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness in selecting trigger injecting position for backdoor attacks on GNNs. For instance, on the node classification task, the backdoor attack with trigger injecting position selected by GraphLIME reaches over 84% attack success rate with less than 2.5% accuracy drop.","PeriodicalId":318719,"journal":{"name":"Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129803201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45