Whole-Program Privilege and Compartmentalization Analysis with the Object-Encapsulation Model
Yudi Yang, Weijie Huang, Kelly Kaoudis, Nathan Dautenhahn
2023 IEEE Security and Privacy Workshops (SPW), published 2023-05-01. DOI: 10.1109/SPW59333.2023.00018
Abstract: We present the object-encapsulation model, a low-level program representation and analysis framework that exposes and quantifies privilege within a program. Successfully compartmentalizing an application today requires significant expertise, but it is an attractive goal because it reduces the connectability of attack vectors in exploit chains. The object-encapsulation model enables understanding how a program can best be compartmentalized without requiring deep knowledge of program internals. We translate a program to a new representation, the Program Capability Graph (PCG), mapping each operation to the code and data objects it may access. We then aggregate PCG elements into encapsulated-object groups. The resulting encapsulated-objects PCG enables measuring program interconnectedness and encapsulated-object privileges in order to explore and compare compartmentalization strategies. Our deep dive into parsers reveals that they are well encapsulated, requiring access to an average of 545/4902 callable interfaces and 1201/29198 external objects. This means the parsers we evaluate can be compartmentalized easily by applying the encapsulated-objects PCG and our analysis to facilitate automatic or manual trust-boundary placement. Overall, the object-encapsulation model provides an essential element of language-level least-privilege analysis in complex systems, aiding codebase understanding and refactoring.
Fuzzing the Latest NTFS in Linux with Papora: An Empirical Study
E. Lo, Ningyu He, Yuejie Shi, Jiajia Xu, Chiachih Wu, Ding Li, Yao Guo
2023 IEEE Security and Privacy Workshops (SPW), published 2023-04-14. DOI: 10.1109/SPW59333.2023.00034
Abstract: Recently, the first feature-rich NTFS implementation, NTFS3, was upstreamed to Linux. Although ensuring the security of NTFS3 is essential for the future of Linux, it remains unclear whether this most recent NTFS implementation for Linux contains 0-day vulnerabilities. To this end, we implemented Papora, the first effective fuzzer for NTFS3. We have identified and reported 3 CVE-assigned 0-day vulnerabilities and 9 severe bugs in NTFS3, and we have investigated the underlying causes and types of these vulnerabilities and bugs. We conducted an empirical study of the identified bugs, and its results offer practical insights into the security of NTFS3 in Linux.
{"title":"On Feasibility of Server-side Backdoor Attacks on Split Learning","authors":"B. Tajalli, O. Ersoy, S. Picek","doi":"10.1109/SPW59333.2023.00014","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00014","url":null,"abstract":"Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private. In split learning, the network is split into two halves: clients have the initial part until the cut layer, and the remaining part of the network is on the server side. In the training process, clients feed the data into the first part of the network and send the output (smashed data) to the server, which uses it as the input for the remaining part of the network. Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks. While there have been studies regarding inference attacks on split learning, it has not yet been tested for backdoor attacks. This paper performs a novel backdoor attack on split learning and studies its effectiveness. Despite traditional backdoor attacks done on the client side, we inject the backdoor trigger from the server side. We provide two attack methods: one using a surrogate client and another using an autoencoder to poison the model via incoming smashed data and its outgoing gradient toward the innocent participants. The results show that despite using strong patterns and injection methods, split learning is highly robust and resistant to such poisoning attacks. While we get the attack success rate of 100% as our best result for the MNIST dataset, in most of the other cases, our attack shows little success when increasing the cut layer.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122933796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised clustering of file dialects according to monotonic decompositions of mixtures","authors":"Michael Robinson, Tate Altman, Denley Lam, Le Li","doi":"10.1109/SPW59333.2023.00019","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00019","url":null,"abstract":"This paper proposes an unsupervised classification method that partitions a set of files into non-overlapping dialects based upon their behaviors, determined by messages produced by a collection of programs that consume them. The pattern of messages can be used as the signature of a particular kind of behavior, with the understanding that some messages are likely to co-occur, while others are not. We propose a novel definition for a file format dialect, based upon these behavioral signatures. A dialect defines a subset of the possible messages, called the required messages. Once files are conditioned upon a dialect and its required messages, the remaining messages are statistically independent. With this definition in hand, we present a greedy algorithm that deduces candidate dialects from a dataset consisting of a matrix of file-message data, demonstrate its performance on several file formats, and prove conditions under which it is optimal. We show that an analyst needs to consider fewer dialects than distinct message patterns, which reduces their cognitive load when studying a complex format.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124205653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Membership Inference Attacks against Diffusion Models","authors":"Tomoya Matsumoto, Takayuki Miura, Naoto Yanai","doi":"10.1109/SPW59333.2023.00013","DOIUrl":"https://doi.org/10.1109/SPW59333.2023.00013","url":null,"abstract":"Diffusion models have attracted attention in recent years as innovative generative models. In this paper, we investigate whether a diffusion model is resistant to a membership inference attack, which evaluates the privacy leakage of a machine learning model. We primarily discuss the diffusion model from the standpoints of comparison with a generative adversarial network (GAN) as conventional models and hyperparameters unique to the diffusion model, i.e., timesteps, sampling steps, and sampling variances. We conduct extensive experiments with DDIM as a diffusion model and DCGAN as a GAN on the CelebA and CIFAR-10 datasets in both white-box and black-box settings and then show that the diffusion model is comparably resistant to a membership inference attack as GAN. Next, we demonstrate that the impact of timesteps is significant and intermediate steps in a noise schedule are the most vulnerable to the attack. We also found two key insights through further analysis. First, we identify that DDIM is vulnerable to the attack for small sample sizes instead of achieving a lower FID. Second, sampling steps in hyperparameters are important for resistance to the attack, whereas the impact of sampling variances is quite limited.","PeriodicalId":308378,"journal":{"name":"2023 IEEE Security and Privacy Workshops (SPW)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130154866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ROPfuscator: Robust Obfuscation with ROP
Giulio De Pasquale, Fukutomo Nakanishi, Daniele Ferla, L. Cavallaro
2023 IEEE Security and Privacy Workshops (SPW), published 2020-12-16. DOI: 10.1109/SPW59333.2023.00026
Abstract: Software obfuscation is crucial in protecting intellectual property in software from reverse engineering attempts. While some obfuscation techniques originate from the obfuscation-reverse engineering arms race, others stem from different research areas, such as binary software exploitation. Return-oriented programming (ROP) has become one of the most effective exploitation techniques for memory error vulnerabilities. ROP interferes with our natural perception of a process's control flow, inspiring us to repurpose ROP as a robust and effective form of software obfuscation. Although previous work already explores ROP's effectiveness as an obfuscation technique, evolving reverse engineering research raises the need for principled reasoning to understand the strengths and limitations of ROP-based mechanisms against man-at-the-end (MATE) attacks. To this end, we present ROPfuscator, a compiler-driven obfuscation pass based on ROP for any programming language supported by LLVM. We incorporate opaque predicates and constants and a novel instruction-hiding technique to withstand sophisticated MATE attacks. More importantly, we introduce a realistic and unified threat model to thoroughly evaluate ROPfuscator and provide principled reasoning on ROP-based obfuscation techniques that addresses code coverage, incurred overhead, correctness, robustness, and practicality challenges. The project's source code is published online to aid further research.