2020 IEEE Security and Privacy Workshops (SPW): Latest Publications

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-03-19 DOI: 10.1109/SPW50608.2020.00024
Erwin Quiring, Konrad Rieck
{"title":"Backdooring and Poisoning Neural Networks with Image-Scaling Attacks","authors":"Erwin Quiring, Konrad Rieck","doi":"10.1109/SPW50608.2020.00024","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00024","url":null,"abstract":"Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and weaken the efficacy of the attacks. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that they change their content when scaled to a specific resolution. By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning. Furthermore, we consider the detection of image-scaling attacks and derive an adaptive attack. In an empirical evaluation, we demonstrate the effectiveness of our strategy. First, we show that backdoors and poisoning work equally well when combined with image-scaling attacks. Second, we demonstrate that current detection defenses against image-scaling attacks are insufficient to uncover our manipulations. Overall, our work provides a novel means for hiding traces of manipulations, being applicable to different poisoning approaches.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129681191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53
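The core mechanism behind the image-scaling attacks this paper builds on can be illustrated with a small optimization: find an attack image that stays close to a benign source image but turns into the attacker's target after downscaling. The sketch below is a minimal illustration in Python/NumPy, assuming single-channel images in [0, 1] and simple block-average (area) downscaling; the published attacks instead target the exact interpolation kernels of real libraries (e.g., bilinear resizing in OpenCV or Pillow) and solve a constrained quadratic program, so the function names and parameters here are illustrative only.

```python
import numpy as np

def downscale(img, factor):
    """Block-average (area) downscaling: a linear operator, so its adjoint is simple."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale(residual, factor):
    """Adjoint of block averaging: expand each value to a factor x factor block, scaled."""
    return np.kron(residual, np.ones((factor, factor))) / factor**2

def scaling_attack(source, target, factor, lam=1000.0, steps=200, lr=0.02):
    """Find an image close to `source` whose downscaled version matches `target`."""
    attack = source.copy()
    for _ in range(steps):
        grad_src = attack - source                      # stay close to the benign source
        residual = downscale(attack, factor) - target   # mismatch at the low resolution
        attack -= lr * (grad_src + lam * upscale(residual, factor))
        attack = np.clip(attack, 0.0, 1.0)              # keep valid pixel intensities
    return attack

rng = np.random.default_rng(0)
src = rng.random((224, 224))      # benign-looking high-resolution image
tgt = rng.random((56, 56))        # content that should appear after 4x downscaling
adv = scaling_attack(src, tgt, factor=4)
# Error stays small when the target is reachable within the valid pixel range.
print(np.abs(downscale(adv, 4) - tgt).mean())
```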
Minimum-Norm Adversarial Examples on KNN and KNN based Models
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-03-14 DOI: 10.1109/SPW50608.2020.00023
Chawin Sitawarin, David A. Wagner
{"title":"Minimum-Norm Adversarial Examples on KNN and KNN based Models","authors":"Chawin Sitawarin, David A. Wagner","doi":"10.1109/SPW50608.2020.00023","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00023","url":null,"abstract":"We study the robustness against adversarial examples of kNN classifiers and classifiers that combine kNN with neural networks. The main difficulty lies in the fact that finding an optimal attack on kNN is intractable for typical datasets. In this work, we propose a gradient-based attack on kNN and kNN-based defenses, inspired by the previous work by Sitawarin & Wagner [1]. We demonstrate that our attack outperforms their method on all of the models we tested with only a minimal increase in the computation time. The attack also beats the state-of-the-art attack [2] on kNN when $k > 1$ using less than 1% of its running time. We hope that this attack can be used as a new baseline for evaluating the robustness of kNN and its variants.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126537522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
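The gradient-based idea in the abstract can be sketched by relaxing the hard top-k vote of kNN into a differentiable soft-neighbour vote and then minimizing the perturbation norm while flipping that vote. The PyTorch snippet below is a minimal sketch under that assumption, not the authors' exact formulation; the helper name soft_knn_attack, the temperature temp, and the trade-off constant c are illustrative.

```python
import torch

def soft_knn_attack(x, y, x_train, y_train, n_classes,
                    temp=0.1, c=5.0, steps=300, lr=0.01):
    """Minimum-norm-style attack on a kNN classifier using a differentiable
    relaxation: neighbours vote through a softmax over negative squared
    distances instead of a hard top-k vote."""
    delta = (1e-3 * torch.randn_like(x)).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    onehot = torch.nn.functional.one_hot(torch.as_tensor(y), n_classes).float()
    for _ in range(steps):
        z = x + delta
        d2 = ((x_train - z) ** 2).sum(dim=1)             # squared distances to training points
        w = torch.softmax(-d2 / temp, dim=0)              # soft neighbour weights
        scores = torch.zeros(n_classes).scatter_add(0, y_train, w)  # soft per-class votes
        margin = scores[y] - (scores - 1e9 * onehot).max()  # > 0 while still classified as y
        loss = delta.norm() + c * torch.clamp(margin, min=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()

# Illustrative usage on toy data (x_train: (N, d) floats, y_train: (N,) int64 labels).
x_train = torch.randn(200, 2)
y_train = (x_train[:, 0] > 0).long()
x, y = torch.tensor([0.5, 0.0]), 1
x_adv = soft_knn_attack(x, y, x_train, y_train, n_classes=2)
print(x_adv, (x_adv - x).norm())
```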
Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-03-11 DOI: 10.1109/SPW50608.2020.00053
Dimitrios Boursinos, X. Koutsoukos
{"title":"Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems","authors":"Dimitrios Boursinos, X. Koutsoukos","doi":"10.1109/SPW50608.2020.00053","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00053","url":null,"abstract":"Cyber-physical systems (CPS) can benefit by the use of learning enabled components (LECs) such as deep neural networks (DNNs) for perception and decision making tasks. However, DNNs are typically non-transparent making reasoning about their predictions very difficult, and hence their application to safety-critical systems is very challenging. LECs could be integrated easier into CPS if their predictions could be complemented with a confidence measure that quantifies how much we trust their output. The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP). We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set. Then, these representations are used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet. The approach is evaluated using a robotic navigation benchmark and the results show that we can computed trusted confidence bounds efficiently in real-time.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115263514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
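The Inductive Conformal Prediction machinery mentioned in the abstract can be sketched as follows, assuming the triplet network has already mapped inputs to embedding vectors. The nearest-neighbour nonconformity measure and the function names are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def nonconformity(emb, label, ref_emb, ref_labels):
    """Nonconformity score: distance to the nearest reference embedding with the
    same label, divided by the distance to the nearest embedding of any other label."""
    d = np.linalg.norm(ref_emb - emb, axis=1)
    return d[ref_labels == label].min() / (d[ref_labels != label].min() + 1e-12)

def icp_prediction_set(test_emb, train_emb, train_labels,
                       cal_emb, cal_labels, labels, eps=0.1):
    """Inductive Conformal Prediction: compute a p-value for every candidate label
    from the calibration scores and return the set of labels with p-value > eps,
    together with confidence (1 - second largest p-value) and credibility (largest)."""
    cal_scores = np.array([nonconformity(e, y, train_emb, train_labels)
                           for e, y in zip(cal_emb, cal_labels)])
    p = {y: (np.sum(cal_scores >= nonconformity(test_emb, y, train_emb, train_labels)) + 1)
            / (len(cal_scores) + 1)
         for y in labels}
    ranked = sorted(p.values(), reverse=True)
    second = ranked[1] if len(ranked) > 1 else 0.0
    return [y for y in labels if p[y] > eps], 1.0 - second, ranked[0]

# Illustrative usage with random embeddings standing in for the triplet-network output.
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(100, 8)); train_labels = rng.integers(0, 3, 100)
cal_emb = rng.normal(size=(50, 8)); cal_labels = rng.integers(0, 3, 50)
print(icp_prediction_set(rng.normal(size=8), train_emb, train_labels,
                         cal_emb, cal_labels, labels=[0, 1, 2]))
```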
Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of β-VAE
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-03-10 DOI: 10.1109/SPW50608.2020.00057
V. Sundar, Shreyas Ramakrishna, Zahra Rahiminasab, A. Easwaran, Abhishek Dubey
{"title":"Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of β-VAE","authors":"V. Sundar, Shreyas Ramakrishna, Zahra Rahiminasab, A. Easwaran, Abhishek Dubey","doi":"10.1109/SPW50608.2020.00057","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00057","url":null,"abstract":"Learning Enabled Components (LECs) are widely being used in a variety of perceptions based autonomy tasks like image segmentation, object detection, end-to-end driving, etc. These components are trained with large image datasets with multimodal factors like weather conditions, time-of-day, traffic-density, etc. The LECs learn from these factors during training, and while testing if there is variation in any of these factors, the components get confused resulting in low confidence predictions. Those images with factor values, not seen, during training are commonly referred to as Out-of-Distribution (OOD). For safe autonomy, it is important to identify the OOD images, so that a suitable mitigation strategy can be performed. Classical one-class classifiers like SVM and SVDD are used to perform OOD detection. However, multiple labels attached to images in these datasets restrict the direct application of these techniques. We address this problem using the latent space of the $beta$ -Variational Autoencoder ($beta$ -VAE). We use the fact that compact latent space generated by an appropriately selected $beta$ - VAE will encode the information about these factors in a few latent variables, and that can be used for quick and computationally inexpensive detection. We evaluate our approach on the nuScenes dataset, and our results show the latent space of $beta$ - VAE is sensitive to encode changes in the values of the generative factor.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129916724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
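A minimal sketch of the detection idea: encode an image with the trained β-VAE, look only at the few latent dimensions that carry most of the information, and flag inputs whose KL divergence from the prior on those dimensions exceeds a threshold calibrated on in-distribution data. The paper combines the latent variables with more principled conformal/martingale tests; the NumPy sketch below simplifies that to a quantile threshold, and the encode(x) -> (mu, logvar) interface is an assumption.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """Per-dimension KL divergence between the encoder posterior N(mu, sigma^2)
    and the standard normal prior used by the beta-VAE."""
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

def fit_ood_detector(encode, calib_images, n_dims=3, quantile=0.95):
    """Pick the latent dimensions that carry the most information (largest mean KL)
    on in-distribution calibration data and record a score threshold."""
    kls = np.stack([kl_per_dim(*encode(x)) for x in calib_images])
    dims = np.argsort(kls.mean(axis=0))[::-1][:n_dims]   # most informative latents
    scores = kls[:, dims].sum(axis=1)
    return dims, np.quantile(scores, quantile)

def is_ood(encode, image, dims, threshold):
    """Flag the image as out-of-distribution when the KL mass on the monitored
    latent dimensions exceeds the calibrated threshold."""
    return kl_per_dim(*encode(image))[dims].sum() > threshold

# Illustrative usage with a dummy encoder standing in for the trained β-VAE.
encode = lambda x: (x[:8], np.zeros(8))                   # (mu, logvar) placeholder
calib = [np.random.default_rng(i).normal(size=32) for i in range(200)]
dims, thr = fit_ood_detector(encode, calib)
print(is_ood(encode, np.full(32, 5.0), dims, thr))        # far-off input -> likely flagged
```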
On the Robustness of Cooperative Multi-Agent Reinforcement Learning
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-03-08 DOI: 10.1109/SPW50608.2020.00027
Jieyu Lin, Kristina Dzeparoska, S. Zhang, A. Leon-Garcia, Nicolas Papernot
{"title":"On the Robustness of Cooperative Multi-Agent Reinforcement Learning","authors":"Jieyu Lin, Kristina Dzeparoska, S. Zhang, A. Leon-Garcia, Nicolas Papernot","doi":"10.1109/SPW50608.2020.00027","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00027","url":null,"abstract":"In cooperative multi-agent reinforcement learning (c-MARL), agents learn to cooperatively take actions as a team to maximize a total team reward. We analyze the robustness of c-MARL to adversaries capable of attacking one of the agents on a team. Through the ability to manipulate this agent's observations, the adversary seeks to decrease the total team reward. Attacking c-MARL is challenging for three reasons: first, it is difficult to estimate team rewards or how they are impacted by an agent mispredicting; second, models are non-differentiable; and third, the feature space is low-dimensional. Thus, we introduce a novel attack. The attacker first trains a policy network with reinforcement learning to find a wrong action it should encourage the victim agent to take. Then, the adversary uses targeted adversarial examples to force the victim to take this action. Our results on the StartCraft II multi-agent benchmark demonstrate that c-MARL teams are highly vulnerable to perturbations applied to one of their agent's observations. By attacking a single agent, our attack method has highly negative impact on the overall team reward, reducing it from 20 to 9.4. This results in the team's winning rate to go down from 98.9% to 0%.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130153408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 43
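The second stage of the attack described above (forcing the victim to take the adversary's chosen action) is essentially a targeted adversarial perturbation of the victim's observation. The PyTorch sketch below shows a PGD-style version of that step, assuming the victim's policy network maps an observation to action logits; the adversarial policy that selects target_action, as well as eps, steps, and alpha, are illustrative assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def targeted_observation_attack(policy, obs, target_action,
                                eps=0.1, steps=20, alpha=0.01):
    """PGD-style targeted perturbation of a single agent's observation: stay inside
    an L-infinity ball of radius eps while maximizing the probability that the
    victim policy selects the adversary's chosen action."""
    obs_adv = obs.clone().detach()
    for _ in range(steps):
        obs_adv.requires_grad_(True)
        logits = policy(obs_adv)                          # victim's action logits
        loss = F.cross_entropy(logits.unsqueeze(0),
                               torch.tensor([target_action]))
        grad, = torch.autograd.grad(loss, obs_adv)
        with torch.no_grad():
            obs_adv = obs_adv - alpha * grad.sign()       # step towards the target action
            obs_adv = obs + (obs_adv - obs).clamp(-eps, eps)  # project back into the ball
    return obs_adv.detach()

# Illustrative usage with a toy linear policy over a 10-dimensional observation.
policy = torch.nn.Linear(10, 4)                           # 4 discrete actions
obs = torch.randn(10)
adv_obs = targeted_observation_attack(policy, obs, target_action=2, eps=1.0, alpha=0.1)
print(policy(adv_obs).argmax().item())   # usually 2 for this toy; real attacks use far smaller eps
```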
Partially Observable Games for Secure Autonomy*
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-02-05 DOI: 10.1109/SPW50608.2020.00046
M. Ahmadi, A. Viswanathan, M. Ingham, K. Tan, A. Ames
{"title":"Partially Observable Games for Secure Autonomy*","authors":"M. Ahmadi, A. Viswanathan, M. Ingham, K. Tan, A. Ames","doi":"10.1109/SPW50608.2020.00046","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00046","url":null,"abstract":"Technology development efforts in autonomy and cyber-defense have been evolving independently of each other, over the past decade. In this paper, we report our ongoing effort to integrate these two presently distinct areas into a single framework. To this end, we propose the two-player partially observable stochastic game formalism to capture both high-level autonomous mission planning under uncertainty and adversarial decision making subject to imperfect information. We show that synthesizing sub-optimal strategies for such games is possible under finite-memory assumptions for both the autonomous decision maker and the cyber-adversary. We then describe an experimental testbed to evaluate the efficacy of the proposed framework.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132017757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
On the Feasibility of Acoustic Attacks Using Commodity Smart Devices
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-01-20 DOI: 10.1109/SPW50608.2020.00031
Matt Wixey, Shane Johnson, Emiliano De Cristofaro
{"title":"On the Feasibility of Acoustic Attacks Using Commodity Smart Devices","authors":"Matt Wixey, Shane Johnson, Emiliano De Cristofaro","doi":"10.1109/SPW50608.2020.00031","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00031","url":null,"abstract":"Sound at frequencies above (ultrasonic) or below (infrasonic) the range of human hearing can, in some settings, cause adverse physiological and psychological effects to individuals. We investigate the feasibility of cyber-attacks that could make smart consumer devices produce possibly imperceptible sound at both high (17-21kHz) and low (60-100Hz) frequencies, at the maximum available volume setting, potentially turning them into acoustic cyber-weapons. To do so, we deploy attacks targeting different smart devices and take sound measurements in an anechoic chamber. For comparison, we also test possible attacks on traditional devices. Overall, we find that some of the devices tested are capable of reproducing frequencies within both high and low ranges, at levels exceeding those recommended in published guidelines. Generally speaking, such attacks are often trivial to develop and in many cases could be added to existing malware payloads, as they may be attractive to adversaries with specific motivations or targets. Finally, we suggest a number of countermeasures for detection and prevention.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127762586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
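For reproducing the kind of measurement described above, a test tone at a chosen frequency can be generated with a few lines of Python and played back through the device under test. This is a minimal sketch assuming a 48 kHz sample rate (so 17-21 kHz tones stay below the Nyquist limit) and 16-bit mono output; the file names and parameters are illustrative.

```python
import numpy as np
import wave

def write_tone(path, freq_hz=19000.0, seconds=5.0, rate=48000, amplitude=0.9):
    """Generate a single sine tone (e.g. 19 kHz, near the top of human hearing)
    and save it as a 16-bit mono WAV file for playback measurements."""
    t = np.arange(int(seconds * rate)) / rate
    samples = (amplitude * np.sin(2 * np.pi * freq_hz * t) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)          # 16-bit samples
        f.setframerate(rate)
        f.writeframes(samples.tobytes())

write_tone("tone_19khz.wav")                  # high-frequency test tone
write_tone("tone_80hz.wav", freq_hz=80.0)     # low-frequency test tone
```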
Adversarial Machine Learning-Industry Perspectives
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-01-20 DOI: 10.2139/ssrn.3532474
R. Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia
{"title":"Adversarial Machine Learning-Industry Perspectives","authors":"R. Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia","doi":"10.2139/ssrn.3532474","DOIUrl":"https://doi.org/10.2139/ssrn.3532474","url":null,"abstract":"Based on interviews with 28 organizations, we found that industry practitioners are not equipped with tactical and strategic tools to protect, detect and respond to attacks on their Machine Learning (ML) systems. We leverage the insights from the interviews and enumerate the gaps in securing machine learning systems when viewed in the context of traditional software security development. We write this paper from the perspective of two personas: developers/ML engineers and security incident responders. The goal of this paper is to layout the research agenda to amend the Security Development Lifecycle for industrial-grade software in the adversarial ML era.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121644090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 160
The Geometry of Syntax and Semantics for Directed File Transformations
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2020-01-14 DOI: 10.1109/SPW50608.2020.00062
Steve Huntsman, Michael Robinson
{"title":"The Geometry of Syntax and Semantics for Directed File Transformations","authors":"Steve Huntsman, Michael Robinson","doi":"10.1109/SPW50608.2020.00062","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00062","url":null,"abstract":"We introduce a conceptual framework that associates syntax and semantics with vertical and horizontal directions in principal bundles and related constructions. This notion of geometry corresponds to a mechanism for performing goal-directed file transformations such as “eliminate unsafe syntax” and suggests various engineering practices.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117181450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
2020 IEEE Security and Privacy Workshops (SPW) Pub Date: 2018-12-02 DOI: 10.1109/SPW50608.2020.00025
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
{"title":"SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems","authors":"Edward Chou, Florian Tramèr, Giancarlo Pellegrino","doi":"10.1109/SPW50608.2020.00025","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00025","url":null,"abstract":"SentiNet is a novel detection framework for localized universal attacks on neural networks. These attacks restrict adversarial noise to contiguous portions of an image and are reusable with different images-constraints that prove useful for generating physically-realizable attacks. Unlike most other works on adversarial detection, SentiNet does not require training a model or preknowledge of an attack prior to detection. Our approach is appealing due to the large number of possible mechanisms and attack-vectors that an attack-specific defense would have to consider. By leveraging the neural network's susceptibility to attacks and by using techniques from model interpretability and object detection as detection mechanisms, SentiNet turns a weakness of a model into a strength. We demonstrate the effectiveness of SentiNet on three different attacks-i.e., data poisoning attacks, trojaned networks, and adversarial patches (including physically realizable attacks)-and show that our defense is able to achieve very competitive performance metrics for all three threats. Finally, we show that SentiNet is robust against strong adaptive adversaries, who build adversarial patches that specifically target the components of SentiNet's architecture.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128094367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 187
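The detection idea can be sketched in two steps: use a saliency method to isolate the most influential contiguous region of the input, then overlay that region onto held-out benign images and check how often it hijacks the prediction. The NumPy sketch below is a simplified stand-in: grad_fn and predict are assumed helpers around the model, the gradient-magnitude mask approximates the Grad-CAM mask SentiNet actually uses, and the final decision boundary over fooled-count and average confidence is omitted.

```python
import numpy as np

def saliency_mask(grad_fn, image, label, keep=0.15):
    """Keep the `keep` fraction of pixels with the largest gradient magnitude for the
    predicted label; a simple stand-in for the Grad-CAM style mask used by SentiNet."""
    sal = np.abs(grad_fn(image, label)).max(axis=-1)      # per-pixel importance (H, W)
    return sal >= np.quantile(sal, 1.0 - keep)

def overlay_test(predict, image, mask, benign_images, label):
    """Paste the suspected region onto clean held-out images; a localized universal
    attack keeps hijacking the prediction, while a benign salient region rarely does."""
    fooled = 0
    for clean in benign_images:
        patched = clean.copy()
        patched[mask] = image[mask]                       # transplant the salient region
        if predict(patched) == label:
            fooled += 1
    return fooled / len(benign_images)                    # high ratio -> likely trigger/patch
```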