{"title":"Adversarial Detection of Censorship Measurements","authors":"Abderrahmen Amich, Birhanu Eshete, V. Yegneswaran","doi":"10.1145/3559613.3563203","DOIUrl":"https://doi.org/10.1145/3559613.3563203","url":null,"abstract":"The arms race between Internet freedom technologists and censoring regimes has catalyzed the deployment of more sophisticated censoring techniques and directed significant research emphasis toward the development of automated tools for censorship measurement and evasion. We highlight Geneva as one of the recent advances in this area. By training a genetic algorithm such as Geneva inside a censored region, we can automatically find novel packet-manipulation-based censorship evasion strategies. In this paper, we explore the resilience of Geneva in the face of censors that actively detect and react to Geneva's measurements. Specifically, we develop machine learning (ML)-based classifiers and leverage a popular hypothesis-testing algorithm that can be deployed at the censor to detect Geneva clients within two to seven flows, i.e., well before Geneva finds any working evasion strategy. We further use public packet-capture traces to show that Geneva flows can be easily distinguished from normal flows and other malicious flows (e.g., network forensics, malware). Finally, we discuss some potential research directions to mitigate Geneva's detection.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130556701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sauteed Onions: Transparent Associations from Domain Names to Onion Addresses","authors":"Rasmus Dahlberg, P. Syverson, Linus Nordberg, M. Finkel","doi":"10.1145/3559613.3563208","DOIUrl":"https://doi.org/10.1145/3559613.3563208","url":null,"abstract":"Onion addresses offer valuable features such as lookup and routing security, self-authenticated connections, and censorship resistance. Therefore, many websites are also available as onionsites in Tor. The way registered domains and onion addresses are associated is, however, a weak link. We introduce sauteed onions, transparent associations from domain names to onion addresses. Our approach relies on TLS certificates to establish onion associations. It is much like today's onion location, which relies on Certificate Authorities (CAs) due to its HTTPS requirement, but has the added benefit of becoming public for everyone to see in Certificate Transparency (CT) logs. We propose and prototype two uses of sauteed onions: certificate-based onion location and search engines that use CT logs as the underlying database. The achieved goals are consistency of available onion associations, which mitigates attacks where users are partitioned depending on which onion addresses they are given; forward censorship-resistance after a TLS site has been configured once; and improved third-party discovery of onion associations, which requires less trust while easily scaling to all onionsites that opt in.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130889829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Splitting Hairs and Network Traces: Improved Attacks Against Traffic Splitting as a Website Fingerprinting Defense","authors":"Matthias Beckerle, Jonathan Magnusson, T. Pulls","doi":"10.1145/3559613.3563199","DOIUrl":"https://doi.org/10.1145/3559613.3563199","url":null,"abstract":"The widespread use of encryption and anonymization technologies---e.g., HTTPS, VPNs, Tor, and iCloud Private Relay---makes network attackers likely to resort to traffic analysis to learn of client activity. For web traffic, such analysis of encrypted traffic is referred to as Website Fingerprinting (WF). WF attacks have improved greatly, in large part thanks to advancements in Deep Learning (DL). In 2019, a new category of defenses was proposed: traffic splitting, where traffic from the client is split over two or more network paths with the assumption that some paths are unobservable by the attacker. In this paper, we take a look at three recently proposed defenses based on traffic splitting: HyWF, CoMPS, and TrafficSliver BWR5. We analyze real-world and simulated datasets for all three defenses to better understand their splitting strategies and effectiveness as defenses. Using our improved DL attack Maturesc on real-world datasets, we improve the classification accuracy with respect to the state of the art from 49.2% to 66.7% for HyWF, the F1 score from 32.9% to 72.4% for CoMPS, and the accuracy from 8.07% to 53.8% for TrafficSliver BWR5. We find that a majority of wrongly classified traces contain fewer than a couple hundred packets/cells: e.g., in every dataset 25% of traces contain fewer than 155 packets. What cannot be observed cannot be classified. Our results show that the proposed traffic splitting defenses on average provide less protection against WF attacks than simply randomly selecting one path and sending all traffic over that path.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130512366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Secure Maximum Weight Matching Approximation on General Graphs","authors":"Malte Breuer, Andreas Klinger, T. Schneider, Ulrike Meyer","doi":"10.1145/3559613.3563209","DOIUrl":"https://doi.org/10.1145/3559613.3563209","url":null,"abstract":"Privacy-preserving protocols for matchings on general graphs can be used for applications such as online dating, bartering, or kidney donor exchange. In addition, they can act as a building block for more complex protocols. While privacy-preserving protocols for matchings on bipartite graphs are a well-researched topic, the case of general graphs has experienced significantly less attention so far. We address this gap by providing the first privacy-preserving protocol for maximum weight matching on general graphs. To maximize the scalability of our approach, we compute a 1/2-approximation instead of an exact solution. For N nodes, our protocol requires O(N log N) rounds, O(N^3) communication, and runs in only 12.5 minutes for N=400.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131399394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Your Consent Is Worth 75 Euros A Year - Measurement and Lawfulness of Cookie Paywalls","authors":"Victor Morel, C. Santos, Yvonne Lintao, Soheil Human","doi":"10.1145/3559613.3563205","DOIUrl":"https://doi.org/10.1145/3559613.3563205","url":null,"abstract":"Most websites offer their content for free, though this gratuity often comes at a price: personal data is collected to finance these websites, mostly through tracking and thus targeted advertising. Cookie walls and paywalls, used to retrieve consent, have recently generated interest from EU DPAs and seem to have grown in popularity. However, they have been overlooked by scholars. We present in this paper 1) the results of an exploratory study conducted on 2800 Central European websites to measure the presence and practices of cookie paywalls, and 2) a framing of their lawfulness amidst the variety of legal decisions and guidelines.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121451503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Padding-only Defenses Add Delay in Tor","authors":"Ethan Witwer, James K. Holland, Nicholas Hopper","doi":"10.1145/3559613.3563207","DOIUrl":"https://doi.org/10.1145/3559613.3563207","url":null,"abstract":"Website fingerprinting is an attack that uses size and timing characteristics of encrypted downloads to identify targeted websites. Since this can defeat the privacy goals of anonymity networks such as Tor, many algorithms to defend against this attack in Tor have been proposed in the literature. These algorithms typically consist of some combination of injecting dummy \"padding'' packets and delaying actual packets to disrupt timing patterns. For usability reasons, Tor is intended to provide low latency; as such, many authors focus on padding-only defenses in the belief that they are \"zero-delay.'' We demonstrate through Shadow simulations that by increasing queue lengths, padding-only defenses add delay when deployed network-wide, so they should not be considered \"zero-delay.'' We further argue that future defenses should also be evaluated using network-wide deployment simulations.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122648304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Encrypted IoT Traffic despite Padding and Shaping","authors":"Aviv Engelberg, A. Wool","doi":"10.1145/3559613.3563191","DOIUrl":"https://doi.org/10.1145/3559613.3563191","url":null,"abstract":"It is well-known that when IoT traffic is unencrypted it is possible to identify the active devices based on their TCP/IP headers, and when traffic is encrypted, packet sizes and timings can still be used to do so. To defend against such fingerprinting, traffic padding and shaping were introduced. In this paper we show that even with these mitigations, the privacy of IoT consumers can still be violated. The main tool we use in our analysis is the full distribution of packet size---as opposed to commonly used statistics such as mean and variance. We evaluate the performance of a local adversary, such as a snooping neighbor or a criminal, against 8 different padding methods. We show that our classifiers achieve perfect (100% accuracy) classification using the full packet-size distribution for low-overhead methods, whereas prior works that rely on statistical metadata achieved lower rates even when no padding and shaping were used. We also achieve an excellent classification rate even against high-overhead methods. We further show how an external adversary such as a malicious ISP or a government intelligence agency, who only sees the padded and shaped traffic as it goes through a VPN, can accurately identify the subset of active devices with Recall and Precision of at least 96%. Finally, we also propose a new method of padding we call the Dynamic STP (DSTP) that incurs significantly less per-packet overhead compared to other padding methods we tested and guarantees more privacy to IoT consumers.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129028525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SplitGuard: Detecting and Mitigating Training-Hijacking Attacks in Split Learning","authors":"Ege Erdogan, Alptekin Kupcu, A. E. Cicek","doi":"10.1145/3559613.3563198","DOIUrl":"https://doi.org/10.1145/3559613.3563198","url":null,"abstract":"Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data-holders. Split learning, in particular, achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers, and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning any task of its choice, e.g. towards outputting easily invertible values. With a concrete example already proposed (Pasquini et al., CCS '21), such training-hijacking attacks present a significant risk for the data privacy of split learning clients. In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate our method's effectiveness, compare it with potential alternatives, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128282008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks against Split Learning","authors":"Ege Erdogan, Alptekin Kupcu, A. E. Cicek","doi":"10.1145/3559613.3563201","DOIUrl":"https://doi.org/10.1145/3559613.3563201","url":null,"abstract":"Training deep neural networks often forces users to work in a distributed or outsourced setting, accompanied by privacy concerns. Split learning aims to address this concern by distributing the model among a client and a server. The scheme supposedly provides privacy, since the server cannot see the clients' models and inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with the knowledge of the client neural network architecture, can recover the input samples and obtain a functionally similar model to the client model, without being detected. (2) We show that if the client keeps hidden only the output layer of the model to ''protect'' the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks using various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provide no more than a false sense of security.","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124821476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","authors":"","doi":"10.1145/3559613","DOIUrl":"https://doi.org/10.1145/3559613","url":null,"abstract":"","PeriodicalId":416548,"journal":{"name":"Proceedings of the 21st Workshop on Privacy in the Electronic Society","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122059059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}