Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
DOI: https://doi.org/10.1145/3579856

LDL: A Defense for Label-Based Membership Inference Attacks
Arezoo Rajabi, D. Sahabandu, Luyao Niu, B. Ramasubramanian, R. Poovendran
DOI: https://doi.org/10.1145/3579856.3582821

The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting: it performs very well on samples seen during training and poorly on samples not seen during training. Overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs), which aim to determine whether a sample belongs to the dataset used to train a classifier (a member) or not (a nonmember). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, in which an adversary requires only knowledge of the predicted labels of samples. LAB MIAs exploit the insight that member samples are typically located farther from the classification decision boundary than nonmembers, and they have been shown to be highly effective across multiple datasets. Defending against an adversary carrying out a LAB MIA on a DNN model that cannot be retrained remains an open problem. We present LDL, a lightweight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around each queried sample such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets used by state-of-the-art (SOTA) LAB MIAs (CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas) with varying sizes of training data. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.
{"title":"Electromagnetic Signal Injection Attacks on Differential Signaling","authors":"Youqian Zhang, Kasper Bonne Rasmussen","doi":"10.1145/3579856.3590326","DOIUrl":"https://doi.org/10.1145/3579856.3590326","url":null,"abstract":"Differential signaling is a method of data transmission that uses two complementary electrical signals to encode information. This allows a receiver to reject any noise by looking at the difference between the two signals, assuming the noise affects both signals equally. Many protocols such as USB, Ethernet, and HDMI use differential signaling to achieve a robust communication channel in a noisy environment. This generally works well and has led many to believe that it is infeasible to remotely inject attacking signals into such a differential pair. In this paper, we challenge this assumption and show that an adversary can in fact inject malicious signals from a distance, purely using common-mode injection, i.e., injecting into both wires at the same time. We explain in detail the principles that an attacker can exploit to achieve a successful injection of an arbitrary bit, and we analyze the success rate of injecting longer arbitrary messages. We demonstrate the attack on a real system and show that the success rate can reach as high as . Finally, we present a case study where we wirelessly inject a message into a Controller Area Network (CAN) bus, which is a differential signaling bus protocol used in many critical applications, including the automotive and aviation sector.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"1996 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127318639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CASSOCK: Viable Backdoor Attacks against DNN in the Wall of Source-Specific Backdoor Defenses","authors":"Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, W. Susilo","doi":"10.1145/3579856.3582829","DOIUrl":"https://doi.org/10.1145/3579856.3582829","url":null,"abstract":"As a critical threat to deep neural networks (DNNs), backdoor attacks can be categorized into two types, i.e., source-agnostic backdoor attacks (SABAs) and source-specific backdoor attacks (SSBAs). Compared to traditional SABAs, SSBAs are more advanced in that they have superior stealthier in bypassing mainstream countermeasures that are effective against SABAs. Nonetheless, existing SSBAs suffer from two major limitations. First, they can hardly achieve a good trade-off between ASR (attack success rate) and FPR (false positive rate). Besides, they can be effectively detected by the state-of-the-art (SOTA) countermeasures (e.g., SCAn [40]). To address the limitations above, we propose a new class of viable source-specific backdoor attacks coined as CASSOCK. Our key insight is that trigger designs when creating poisoned data and cover data in SSBAs play a crucial role in demonstrating a viable source-specific attack, which has not been considered by existing SSBAs. With this insight, we focus on trigger transparency and content when crafting triggers for poisoned dataset where a sample has an attacker-targeted label and cover dataset where a sample has a ground-truth label. Specifically, we implement CASSOCKTrans that designs a trigger with heterogeneous transparency to craft poisoned and cover datasets, presenting better attack performance than existing SSBAs. We also propose CASSOCKCont that extracts salient features of the attacker-targeted label to generate a trigger, entangling the trigger features with normal features of the label, which is stealthier in bypassing the SOTA defenses. While both CASSOCKTrans and CASSOCKCont are orthogonal, they are complementary to each other, generating a more powerful attack, called CASSOCKComp, with further improved attack performance and stealthiness. To demonstrate their viability, we perform a comprehensive evaluation of the three CASSOCK-based attacks on four popular datasets (i.e., MNIST, CIFAR10, GTSRB and LFW) and three SOTA defenses (i.e., extended Neural Cleanse [45], Februus [8], and SCAn [40]). Compared with a representative SSBA as a baseline (SSBABase), CASSOCK-based attacks have significantly advanced the attack performance, i.e., higher ASR and lower FPR with comparable CDA (clean data accuracy). Besides, CASSOCK-based attacks have effectively bypassed the SOTA defenses, and SSBABase cannot.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128232567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CacheFX: A Framework for Evaluating Cache Security
Daniel Genkin, William Kosasih, Fangfei Liu, Anna Trikalinou, Thomas Unterluggauer, Y. Yarom
DOI: https://doi.org/10.1145/3579856.3595794

Over the last two decades, the danger of sharing resources between programs has been repeatedly highlighted. Multiple side-channel attacks, which seek to exploit shared components for leaking information, have been devised, mostly targeting shared caching components. In response, the research community has proposed multiple cache designs that aim at curbing the source of side channels. With multiple competing designs, there is a need to assess the level of security against side-channel attacks that each design offers. Several metrics have been suggested for performing such evaluations. However, these tend to be limited both in the potential adversaries they consider and in their applicability to real-world attacks, as opposed to attack techniques. Moreover, all existing metrics implicitly assume that a single metric can encompass the nuances of side-channel security. In this work we propose CacheFX, a flexible framework for assessing and evaluating the resilience of cache designs to side-channel attacks. CacheFX allows the evaluator to implement various cache designs, victims, and attackers, and to exercise them to assess the leakage of information via the cache. To demonstrate the power of CacheFX, we implement multiple cache designs and replacement algorithms, and devise three evaluation metrics that measure different aspects of the caches: (1) the entropy induced by a memory access; (2) the complexity of building an eviction set; and (3) protection against cryptographic attacks. Our experiments highlight that different security metrics give different insights into designs, making a comprehensive analysis mandatory. For instance, while eviction-set building was fastest for randomized skewed caches, these caches featured lower eviction entropy and higher practical attack complexity. Our experiments also show that all non-partitioned designs allow for effective cryptographic attacks. However, in state-of-the-art secure caches, eviction-based attacks are more difficult to mount than occupancy-based attacks, highlighting the need to consider the latter in cache design.
Mitigating Adversarial Attacks by Distributing Different Copies to Different Buyers
Jiyi Zhang, Hansheng Fang, W. Tann, Ke Xu, Chengfang Fang, E. Chang
DOI: https://doi.org/10.1145/3579856.3582824

Machine learning models are vulnerable to adversarial attacks. In this paper, we consider the scenario where a model is distributed to multiple buyers, among which a malicious buyer attempts to attack another buyer. The malicious buyer probes its copy of the model to search for adversarial samples and then presents the found samples to the victim's copy of the model in order to replicate the attack. We point out that by distributing different copies of the model to different buyers, we can mitigate the attack such that adversarial samples found on one copy do not work on another. We observed that training a model with different randomness indeed mitigates such replication to a certain degree. However, there is no guarantee, and retraining is computationally expensive. A number of works have extended the retraining method to enhance the differences among models, but only a very limited number of models can be produced this way, and the computational cost becomes even higher. Therefore, we propose a flexible parameter rewriting method that directly modifies the model's parameters. This method requires no additional training and can generate a large number of copies in a more controllable manner, where each copy induces different adversarial regions. Experiments show that rewriting significantly mitigates the attacks while retaining high classification accuracy. For instance, on the GTSRB dataset, against the Hop Skip Jump attack, an attractor-based rewriter reduces the success rate of replicating the attack to 0.5%, whereas independently training copies with different randomness only reduces it to 6.5%. From this study, we believe that there are many further directions worth exploring.
{"title":"Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks","authors":"Zitao Chen, Pritam Dash, K. Pattabiraman","doi":"10.1145/3579856.3582816","DOIUrl":"https://doi.org/10.1145/3579856.3582816","url":null,"abstract":"Adversarial patch attacks create adversarial examples by injecting arbitrary distortions within a bounded region of the input to fool deep neural networks (DNNs). These attacks are robust (i.e., physically-realizable) and universally malicious, and hence represent a severe security threat to real-world DNN-based systems. We propose Jujutsu, a two-stage technique to detect and mitigate robust and universal adversarial patch attacks. We first observe that adversarial patches are crafted as localized features that yield large influence on the prediction output, and continue to dominate the prediction on any input. Jujutsu leverages this observation for accurate attack detection with low false positives. Patch attacks corrupt only a localized region of the input, while the majority of the input remains unperturbed. Therefore, Jujutsu leverages generative adversarial networks (GAN) to perform localized attack recovery by synthesizing the semantic contents of the input that are corrupted by the attacks, and reconstructs a “clean” input for correct prediction. We evaluate Jujutsu on four diverse datasets spanning 8 different DNN models, and find that it achieves superior performance and significantly outperforms four existing defenses. We further evaluate Jujutsu against physical-world attacks, as well as adaptive attacks.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"242 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133817612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"T-TER: Defeating A2 Trojans with Targeted Tamper-Evident Routing","authors":"Timothy Trippel, K. Shin, K. Bush, Matthew Hicks","doi":"10.1145/3579856.3582837","DOIUrl":"https://doi.org/10.1145/3579856.3582837","url":null,"abstract":"Since the inception of the Integrated Circuit (IC), the size of the transistors used to construct them has continually shrunk. While this advancement significantly improves computing capability, fabrication costs have skyrocketed. As a result, most IC designers must now outsource fabrication. Outsourcing, however, presents a security threat: comprehensive post-fabrication inspection is infeasible given the size of modern ICs, so it is nearly impossible to know if the foundry has altered the original design during fabrication (i.e., inserted a hardware Trojan). Defending against a foundry-side adversary is challenging because—even with as few as two gates—hardware Trojans can completely undermine software security. Researchers have attempted to both detect and prevent foundry-side attacks, but all existing defenses are ineffective against additive Trojans with footprints of a few gates or less. We present Targeted Tamper-Evident Routing (T-TER), a layout-level defense against untrusted foundries, capable of thwarting the insertion of even the stealthiest hardware Trojans. T-TER is directed and routing-centric: it prevents foundry-side attackers from routing Trojan wires to, or directly adjacent to, security-critical wires by shielding them with guard wires. Unlike shield wires commonly deployed for cross-talk reduction, T-TER guard wires pose an additional technical challenge: they must be tamper-evident in both the digital (deletion attacks) and analog (move and jog attacks) domains. We address this challenge by developing a class of designed-in guard wires that are added to the design specifically to protect security-critical wires. T-TER’s guard wires incur minimal overhead, scale with design complexity, and provide tamper-evidence against attacks. We implement automated tools (on top of commercial CAD tools) for deploying guard wires around targeted nets within an open-source System-on-Chip. Lastly, using an existing IC threat assessment toolchain, we show T-TER defeats even the stealthiest known hardware Trojan, with ≈ 1% overhead.","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125209964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","authors":"","doi":"10.1145/3579856","DOIUrl":"https://doi.org/10.1145/3579856","url":null,"abstract":"","PeriodicalId":156082,"journal":{"name":"Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115255018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}