Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy: Latest Publications

Leveraging Disentangled Representations to Improve Vision-Based Keystroke Inference Attacks Under Low Data Constraints
John Lim, Jan-Michael Frahm, F. Monrose
doi:10.1145/3508398.3511498
Published: 2022-04-05
Abstract: Keystroke inference attacks are a form of side-channel attack in which an attacker leverages various techniques to recover a user's keystrokes as she inputs information into some display (e.g., while sending a text message or entering her PIN). Typically, these attacks leverage machine learning approaches, but assessing the realism of the threat space has lagged behind the pace of machine learning advancements, due in part to the challenges in curating large real-life datasets. We aim to overcome the challenge of having a limited amount of real data by introducing a video domain adaptation technique that is able to leverage synthetic data through supervised disentangled learning. Specifically, for a given domain, we decompose the observed data into two factors of variation: style and content. Doing so provides four learned representations: real-life style, synthetic style, real-life content, and synthetic content. Then, we combine them into feature representations from all combinations of style-content pairings across domains, and train a model on these combined representations to classify the content (i.e., labels) of a given datapoint in the style of another domain. We evaluate our method on real-life data using a variety of metrics to quantify the amount of information an attacker is able to recover. We show that our method prevents our model from overfitting to a small real-life training set, indicating that our method is an effective form of data augmentation, thereby making keystroke inference attacks more practical.
Citations: 0
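The cross-domain recombination step the abstract describes (pairing each domain's style representation with each domain's content representation before classification) can be sketched as follows. This is a minimal illustration with toy NumPy vectors standing in for encoder outputs; the function name and concatenation-based combination are assumptions, not the paper's actual architecture.

```python
import numpy as np

def combine_style_content(styles, contents):
    """Form every cross-domain (style, content) pairing by concatenation.

    styles, contents: dicts mapping a domain name to a 1-D embedding.
    Returns a dict mapping (style_domain, content_domain) -> combined vector.
    """
    return {
        (s_dom, c_dom): np.concatenate([s_vec, c_vec])
        for s_dom, s_vec in styles.items()
        for c_dom, c_vec in contents.items()
    }

# Toy embeddings standing in for disentangled encoder outputs.
styles = {"real": np.ones(4), "synthetic": np.zeros(4)}
contents = {"real": np.full(4, 2.0), "synthetic": np.full(4, 3.0)}

pairs = combine_style_content(styles, contents)
# Four pairings: real/real, real/synthetic, synthetic/real, synthetic/synthetic.
assert len(pairs) == 4
assert pairs[("real", "synthetic")].shape == (8,)
```

A downstream classifier trained on all four pairings sees each content label rendered in both styles, which is the data-augmentation effect the paper reports.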
Toward Deep Learning Based Access Control
M. N. Nobi, R. Krishnan, Yufei Huang, Mehrnoosh Shakarami, R. Sandhu
doi:10.1145/3508398.3511497
Published: 2022-03-28
Abstract: A common trait of current access control approaches is the challenging need to engineer abstract and intuitive access control models. This entails designing access control information in the form of roles (RBAC), attributes (ABAC), or relationships (ReBAC) as the case may be, and subsequently, designing access control rules. This framework has its benefits but has significant limitations in the context of modern systems that are dynamic, complex, and large-scale, due to which it is difficult for a human administrator to maintain an accurate access control state in the system. This paper proposes Deep Learning Based Access Control (DLBAC) by leveraging significant advances in deep learning technology as a potential solution to this problem. We envision that DLBAC could complement and, in the long term, has the potential to even replace, classical access control models with a neural network that reduces the burden of access control model engineering and updates. Without loss of generality, we conduct a thorough investigation of a candidate DLBAC model, called DLBAC_alpha, using both real-world and synthetic datasets. We demonstrate the feasibility of the proposed approach by addressing issues related to accuracy, generalization, and explainability. We also discuss challenges and future research directions.
Citations: 20
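The core idea of replacing engineered rules with a learned decision function can be sketched as an interface: metadata about a user and a resource goes in, per-operation grant decisions come out. Everything here (dimensions, the two-layer network, the threshold, the name `decide`) is hypothetical and only illustrates the shape of such a model, not the paper's DLBAC_alpha.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DLBAC-style network: metadata in, per-operation grant scores out.
N_USER_ATTRS, N_RES_ATTRS, N_OPS = 8, 8, 4
W1 = rng.normal(size=(N_USER_ATTRS + N_RES_ATTRS, 16))
W2 = rng.normal(size=(16, N_OPS))

def decide(user_meta, resource_meta, threshold=0.5):
    """Return a boolean grant decision for each operation."""
    x = np.concatenate([user_meta, resource_meta])
    h = np.tanh(x @ W1)                         # hidden layer
    probs = 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid score per operation
    return probs >= threshold

grants = decide(rng.random(N_USER_ATTRS), rng.random(N_RES_ATTRS))
assert grants.shape == (N_OPS,)
```

In a trained model, `W1` and `W2` would be fit to historical access logs rather than drawn at random; the point is that the administrator maintains training data instead of hand-written rules.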
Securing Smart Grids Through an Incentive Mechanism for Blockchain-Based Data Sharing
Daniël Reijsbergen, Aung Maw, Tien Tuan Anh Dinh, Wen-Tai Li, C. Yuen
doi:10.1145/3508398.3511504
Published: 2022-02-09
Abstract: Smart grids leverage the data collected from smart meters to make important operational decisions. However, they are vulnerable to False Data Injection (FDI) attacks in which an attacker manipulates meter data to disrupt the grid operations. Existing works on FDI are based on a simple threat model in which a single grid operator has access to all the data, and only some meters can be compromised. Our goal is to secure smart grids against FDI under a realistic threat model. To this end, we present a threat model in which there are multiple operators, each with a partial view of the grid, and each can be fully compromised. An effective defense against FDI in this setting is to share data between the operators. However, the main challenge here is to incentivize data sharing. We address this by proposing an incentive mechanism that rewards operators for uploading data, but penalizes them if the data is missing or anomalous. We derive formal conditions under which our incentive mechanism is provably secure against operators who withhold or distort measurement data for profit. We then implement the data sharing solution on a private blockchain, introducing several optimizations that overcome the inherent performance limitations of the blockchain. Finally, we conduct an experimental evaluation that demonstrates that our implementation has practical performance.
Citations: 6
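The reward/penalty structure of the incentive mechanism can be sketched as a per-round settlement: each expected reading earns a reward, while a missing or statistically anomalous reading incurs a penalty. The anomaly test (a z-score cutoff), the parameter values, and the function name are illustrative assumptions; the paper derives formal conditions rather than using this toy rule.

```python
def settle_round(reports, expected_meters, reward=1.0, penalty=3.0, z_max=3.0):
    """Toy per-round payout for one operator's uploads.

    reports: dict meter_id -> reading, or None if withheld.
    Assumes at least one non-None reading in the round.
    """
    readings = [v for v in reports.values() if v is not None]
    mean = sum(readings) / len(readings)
    std = (sum((v - mean) ** 2 for v in readings) / len(readings)) ** 0.5 or 1.0
    payout = 0.0
    for meter in expected_meters:
        v = reports.get(meter)
        if v is None or abs(v - mean) / std > z_max:
            payout -= penalty   # withheld or statistical outlier
        else:
            payout += reward
    return payout

honest = settle_round({"a": 10.0, "b": 10.2, "c": 9.8}, ["a", "b", "c"])
withheld = settle_round({"a": 10.0, "b": 10.2, "c": None}, ["a", "b", "c"])
assert honest > withheld   # withholding data is strictly unprofitable
```

The design goal the paper proves formally is exactly the inequality in the last line: under the mechanism's conditions, honest uploading dominates withholding or distorting data.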
A TOCTOU Attack on DICE Attestation
S. Hristozov, Moritz Wettermann, Manuel Huber
doi:10.1145/3508398.3511507
Published: 2022-01-27
Abstract: A major security challenge for modern IoT deployments is to ensure that the devices run legitimate firmware free from malware. This challenge can be addressed through a security primitive called attestation, which allows a remote backend to verify the firmware integrity of the devices it manages. In order to accelerate broad attestation adoption in the IoT domain, the Trusted Computing Group (TCG) has introduced the Device Identifier Composition Engine (DICE) series of specifications. DICE is a hardware-software architecture for constrained (e.g., microcontroller-based) IoT devices where the firmware is divided into successively executed layers. In this paper, we demonstrate a remote Time-Of-Check Time-Of-Use (TOCTOU) attack on DICE-based attestation. We demonstrate that it is possible to install persistent malware in the flash memory of a constrained microcontroller that cannot be detected through DICE-based attestation. The main idea of our attack is to install malware during runtime of the application logic in the top firmware layer. The malware reads the valid attestation key and stores it on the device's flash memory. After reboot, the malware uses the previously stored key for all subsequent attestations to the backend. We conduct the installation of malware and copying of the key through Return-Oriented Programming (ROP). As a platform for our demonstration, we use the Cortex-M-based nRF52840 microcontroller. We provide a discussion of several possible countermeasures which can mitigate the shortcomings of the DICE specifications.
Citations: 4
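Why a stolen key defeats DICE attestation can be shown with the key-derivation property itself: the attestation key is derived from the device secret and the measured firmware, so re-infected firmware yields a different key, but a key exfiltrated at runtime keeps matching what the backend expects. The sketch below uses HMAC-SHA256 as a stand-in derivation; the exact DICE derivation differs, and all names here are illustrative.

```python
import hashlib
import hmac

def dice_key(device_secret: bytes, firmware: bytes) -> bytes:
    """DICE-style derivation: the key is bound to the measured firmware."""
    measurement = hashlib.sha256(firmware).digest()
    return hmac.new(device_secret, measurement, hashlib.sha256).digest()

secret = b"unique-device-secret"
good_fw = b"legitimate firmware image"
key_at_boot = dice_key(secret, good_fw)

# TOCTOU gap: malware that runs *after* the boot-time measurement can
# read and persist the already-derived key.
stolen_key = key_at_boot

# After reboot with the malware persisted, the freshly derived key changes...
infected_fw = b"legitimate firmware image + malware"
assert dice_key(secret, infected_fw) != key_at_boot
# ...but the stolen key still matches what the backend provisioned,
# so attestations using it continue to verify.
assert stolen_key == key_at_boot
```

This is the time-of-check/time-of-use gap the paper exploits: the measurement is honest at boot, yet the derived key outlives the moment of measurement.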
Privacy-Preserving Maximum Matching on General Graphs and its Application to Enable Privacy-Preserving Kidney Exchange
Malte Breuer, Ulrike Meyer, S. Wetzel
doi:10.1145/3508398.3511509
Published: 2022-01-17
Abstract: To this day, there are still some countries where the exchange of kidneys between multiple incompatible patient-donor pairs is restricted by law. Typically, legal regulations in this context are put in place to prohibit coercion and manipulation in order to prevent a market for organ trade. Yet, in countries where kidney exchange is practiced, existing platforms to facilitate such exchanges generally lack sufficient privacy mechanisms. In this paper, we propose a privacy-preserving protocol for kidney exchange that not only addresses the privacy problem of existing platforms but also is geared to lead the way in overcoming legal issues in those countries where kidney exchange is still not practiced. In our approach, we use the concept of secret sharing to distribute the medical data of patients and donors among a set of computing peers in a privacy-preserving fashion. These computing peers then execute our new Secure Multi-Party Computation (SMPC) protocol among each other to determine an optimal set of kidney exchanges. As part of our new protocol, we devise a privacy-preserving solution to the maximum matching problem on general graphs. We have implemented the protocol in the SMPC benchmarking framework MP-SPDZ and provide a comprehensive performance evaluation. Furthermore, we analyze the practicality of our protocol when used in a dynamic setting (where patients and donors arrive and depart over time) based on a data set from the United Network for Organ Sharing.
Citations: 5
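The plaintext functionality the SMPC protocol computes, maximum matching on a general graph, can be illustrated with a tiny brute-force solver. This is only a sketch of the problem being solved privately, not the paper's protocol, and the exponential search is usable only on toy instances.

```python
from itertools import combinations

def maximum_matching(edges):
    """Brute-force maximum matching on a general graph.

    edges: list of (u, v) vertex pairs. Returns a largest set of
    vertex-disjoint edges. Exponential time; illustrative only.
    """
    for k in range(len(edges), 0, -1):          # try largest sizes first
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):     # edges share no vertex
                return list(subset)
    return []

# Vertices are patient-donor pairs; an edge means a mutually
# compatible exchange (toy compatibility data).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
matching = maximum_matching(edges)
assert len(matching) == 2   # e.g. (0,1) and (2,3) can exchange simultaneously
```

In the paper this computation runs inside an SMPC protocol over secret-shared compatibility data, so no computing peer ever sees the medical attributes that define the edges.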
DP-UTIL: Comprehensive Utility Analysis of Differential Privacy in Machine Learning
Ismat Jarin, Birhanu Eshete
doi:10.1145/3508398.3511513
Published: 2021-12-24
Abstract: Differential Privacy (DP) has emerged as a rigorous formalism to quantify the privacy protection provided by an algorithm that operates on privacy-sensitive data. In machine learning (ML), DP has been employed to limit inference/disclosure of training examples. Prior work leveraged DP across the ML pipeline, albeit in isolation, often focusing on mechanisms such as gradient perturbation. In this paper, we present DP-UTIL, a holistic utility analysis framework of DP across the ML pipeline with focus on input perturbation, objective perturbation, gradient perturbation, output perturbation, and prediction perturbation. Given an ML task on privacy-sensitive data, DP-UTIL enables an ML privacy practitioner to perform holistic comparative analysis on the impact of DP in these five perturbation spots, measured in terms of model utility loss, privacy leakage, and the number of truly revealed training samples. We evaluate DP-UTIL over classification tasks on vision, medical, and financial datasets, using two representative learning algorithms (logistic regression and deep neural network) against membership inference attack as a case study attack. One of the highlights of our results is that prediction perturbation consistently achieves the lowest utility loss on all models across all datasets. In logistic regression models, objective perturbation results in the lowest privacy leakage compared to other perturbation techniques. For deep neural networks, gradient perturbation results in the lowest privacy leakage. Moreover, our results on truly revealed records suggest that as privacy leakage increases, a differentially private model reveals a greater number of member samples. Overall, our findings suggest that to make informed decisions as to the choice of perturbation mechanisms, an ML privacy practitioner needs to examine the dynamics among optimization techniques (convex vs. non-convex), number of classes, and privacy budget.
Citations: 3
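One of the five perturbation spots, prediction perturbation, has a particularly simple form: add calibrated Laplace noise to the model's output scores before releasing the predicted label. A minimal sketch, assuming unit sensitivity and a generic score vector (the function name and defaults are illustrative, not DP-UTIL's API):

```python
import numpy as np

def perturb_prediction(scores, epsilon, sensitivity=1.0, rng=None):
    """Prediction perturbation: Laplace-noise the output scores,
    then release only the resulting argmax label."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(scale=sensitivity / epsilon, size=scores.shape)
    return int(np.argmax(scores + noise))

scores = np.array([0.1, 0.7, 0.2])
# Large epsilon (weak privacy): noise is tiny, so the label is preserved.
assert perturb_prediction(scores, epsilon=1e6, rng=np.random.default_rng(0)) == 1
```

Smaller epsilon values inject more noise and flip the released label more often, which is the utility-loss/privacy trade-off the framework measures across the five spots.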
EG-Booster: Explanation-Guided Booster of ML Evasion Attacks
Abderrahmen Amich, Birhanu Eshete
doi:10.1145/3508398.3511510
Published: 2021-08-31
Abstract: The widespread usage of machine learning (ML) in a myriad of domains has raised questions about its trustworthiness in high-stakes environments. Part of the quest for trustworthy ML is assessing robustness to test-time adversarial examples. In line with the trustworthy ML goal, a useful input to potentially aid robustness evaluation is feature-based explanations of model predictions. In this paper, we present a novel approach, called EG-Booster, that leverages techniques from explainable ML to guide adversarial example crafting for improved robustness evaluation of ML models. The key insight in EG-Booster is the use of feature-based explanations of model predictions to guide adversarial example crafting by adding consequential perturbations (likely to result in model evasion) and avoiding non-consequential perturbations (unlikely to contribute to evasion). EG-Booster is agnostic to model architecture and threat model, and supports diverse distance metrics used in the literature. We evaluate EG-Booster using image classification benchmark datasets: MNIST and CIFAR10. Our findings suggest that EG-Booster significantly improves the evasion rate of state-of-the-art attacks while performing a smaller number of perturbations. Through extensive experiments that cover four white-box and three black-box attacks, we demonstrate the effectiveness of EG-Booster against two undefended neural networks trained on MNIST and CIFAR10, and an adversarially-trained ResNet model trained on CIFAR10. Furthermore, we introduce a stability assessment metric and evaluate the reliability of our explanation-based attack boosting approach by tracking the similarity between the model's predictions across multiple runs of EG-Booster. Our results over 10 separate runs suggest that EG-Booster's output is stable across distinct runs. Combined with state-of-the-art attacks, we hope EG-Booster will be used towards improved robustness assessment of ML models against evasion attacks.
Citations: 1
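The key insight (keep perturbations the explanation marks as consequential, drop the rest) can be sketched as a filtering step applied to a baseline attack's perturbation vector. The sign convention below (positive attribution means the feature supports the current prediction, so perturbing it is likely consequential) is one plausible reading of the idea, not EG-Booster's exact rule, and all names are illustrative.

```python
import numpy as np

def filter_perturbation(x, delta, attributions):
    """Toy explanation-guided filtering: keep a per-feature perturbation
    only where the feature's attribution toward the true class is
    positive (perturbing it plausibly aids evasion); zero out the rest."""
    consequential = attributions > 0
    return x + np.where(consequential, delta, 0.0)

x = np.array([0.5, 0.5, 0.5])          # clean input
delta = np.array([0.3, -0.3, 0.3])     # perturbation from some baseline attack
attributions = np.array([0.9, -0.1, 0.4])  # e.g. from SHAP/IG (assumed)

adv = filter_perturbation(x, delta, attributions)
# Only features 0 and 2 end up perturbed; feature 1's change is dropped.
assert np.allclose(adv, [0.8, 0.5, 0.8])
```

Dropping non-consequential components is what lets the boosted attack evade with fewer perturbed features, matching the paper's reported reduction in perturbation count.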
Genomic Data Sharing under Dependent Local Differential Privacy
Emre Yilmaz, Tianxi Ji, Erman Ayday, Pan Li
doi:10.1145/3508398.3511519
Published: 2021-02-15
Abstract: Privacy-preserving genomic data sharing is prominent to increase the pace of genomic research, and hence to pave the way towards personalized genomic medicine. In this paper, we introduce (ε, T)-dependent local differential privacy (LDP) for privacy-preserving sharing of correlated data and propose a genomic data sharing mechanism under this privacy definition. We first show that the original definition of LDP is not suitable for genomic data sharing, and then we propose a new mechanism to share genomic data. The proposed mechanism considers the correlations in data during data sharing, eliminates statistically unlikely data values beforehand, and adjusts the probability distributions for each shared data point accordingly. By doing so, we show that we can prevent an attacker from inferring the correct values of the shared data points by utilizing the correlations in the data. By adjusting the probability distributions of the shared states of each data point, we also improve the utility of shared data for the data collector. Furthermore, we develop a greedy algorithm that strategically identifies the processing order of the shared data points with the aim of maximizing the utility of the shared data. Our evaluation results on a real-life genomic dataset show the superiority of the proposed mechanism compared to the randomized response mechanism (a widely used technique to achieve LDP).
Citations: 7
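The baseline the paper compares against, randomized response, is a standard LDP mechanism and easy to sketch: report the true value with probability e^ε / (e^ε + k - 1) over a k-state domain (e.g. genotype values 0/1/2), otherwise report a uniformly chosen other state. The function below is a generic textbook version, not the paper's mechanism.

```python
import math
import random

def randomized_response(value, epsilon, domain_size=3, rng=None):
    """k-ary randomized response: report the truth w.p.
    e^eps / (e^eps + k - 1), else a uniform other state."""
    rng = rng or random.Random()
    p_true = math.exp(epsilon) / (math.exp(epsilon) + domain_size - 1)
    if rng.random() < p_true:
        return value
    others = [v for v in range(domain_size) if v != value]
    return rng.choice(others)

rng = random.Random(42)
reports = [randomized_response(1, epsilon=2.0, rng=rng) for _ in range(1000)]
# With eps = 2 and k = 3, the truth is reported w.p. about 0.79.
frac_true = reports.count(1) / len(reports)
assert 0.7 < frac_true < 0.9
```

Because this mechanism perturbs each data point independently, it ignores the correlations between neighboring genomic positions, which is exactly the gap the paper's dependent-LDP mechanism targets.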