Title: The Medium is the Message: How Secure Messaging Apps Leak Sensitive Data to Push Notification Services
Authors: N. Samarin, Alex Sanchez, Trinity Chung, Akshay Dan Bhavish Juleemun, Conor Gilsenan, Nick Merrill, Joel Reardon, Serge Egelman
DOI: https://doi.org/10.56553/popets-2024-0151
Published: 2024-07-15, Proceedings on Privacy Enhancing Technologies
Abstract: Like most modern software, secure messaging apps rely on third-party components to implement important app functionality. Although this practice reduces engineering costs, it also introduces the risk of inadvertent privacy breaches due to misconfiguration errors or incomplete documentation. Our research investigated secure messaging apps' usage of Google's Firebase Cloud Messaging (FCM) service to send push notifications to Android devices. We analyzed 21 popular secure messaging apps from the Google Play Store to determine what personal information these apps leak in the payload of push notifications sent via FCM. Of these apps, 11 leaked metadata, including user identifiers (10 apps), sender or recipient names (7 apps), and phone numbers (2 apps), while 4 apps leaked the actual message content. Furthermore, none of the data we observed being leaked to FCM was specifically disclosed in those apps' privacy disclosures. We also found several apps employing strategies to mitigate this privacy leakage to FCM, with varying levels of success. Of the strategies we identified, none appeared to be common, shared, or well-supported. We argue that this is fundamentally an economics problem: incentives need to be correctly aligned to motivate platforms and SDK providers to make their systems secure and private by default.
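The payload-versus-ping distinction at the heart of these findings can be sketched as follows. This is a hypothetical illustration built around the real FCM HTTP v1 message shape; the field names (`sender`, `body`, `event`) and both helper functions are assumptions for illustration, not code from any of the studied apps:

```python
import json

# Real FCM HTTP v1 endpoint (shown for context; no request is sent here).
FCM_V1_ENDPOINT = "https://fcm.googleapis.com/v1/projects/{project_id}/messages:send"

def leaky_payload(device_token, sender, text):
    """The leak pattern: plaintext sender name and message body ride
    inside the FCM data payload, so Google's servers can read them."""
    return {"message": {
        "token": device_token,
        "data": {"sender": sender, "body": text},
    }}

def private_payload(device_token):
    """A mitigation pattern: send only an opaque wake-up ping; the client
    then fetches and decrypts the real message over the app's own
    end-to-end-encrypted channel."""
    return {"message": {
        "token": device_token,
        "data": {"event": "new_message"},  # no content, no metadata
    }}

leaky = leaky_payload("device-token-123", "Alice", "meet at 6pm")
private = private_payload("device-token-123")
assert "meet at 6pm" in json.dumps(leaky)        # content is visible to FCM
assert "meet at 6pm" not in json.dumps(private)  # nothing sensitive leaves the app
```

The wake-up-ping pattern is one of the mitigation strategies the paper alludes to; its cost is an extra round trip from the client to the app's own servers on every notification.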
Title: Automatic Generation of Web Censorship Probe Lists
Authors: Jenny Tang, Léo Alvarez, Arjun Brar, Nguyen Phong Hoang, Nicolas Christin
DOI: https://doi.org/10.56553/popets-2024-0106
Published: 2024-07-11, Proceedings on Privacy Enhancing Technologies
Abstract: Domain probe lists---used to determine which URLs to probe for Web censorship---play a critical role in Internet censorship measurement studies. Indeed, the size and accuracy of the domain probe list limits the set of censored pages that can be detected; inaccurate lists can lead to an incomplete view of the censorship landscape or biased results. Previous efforts to generate domain probe lists have been mostly manual or crowdsourced. This approach is time-consuming, prone to errors, and does not scale well to the ever-changing censorship landscape. In this paper, we explore methods for automatically generating probe lists that are both comprehensive and up-to-date for Web censorship measurement. We start from an initial set of 139,957 unique URLs from various existing test lists consisting of pages from a variety of languages to generate new candidate pages. By analyzing content from these URLs (i.e., performing topic and keyword extraction), expanding these topics, and using them as a feed to search engines, our method produces 119,255 new URLs across 35,147 domains. We then test the new candidate pages by attempting to access each URL from servers in eleven different global locations over a span of four months to check for their connectivity and potential signs of censorship. Our measurements reveal that our method discovered over 1,400 domains---not present in the original dataset---we suspect to be blocked. In short, automatically updating probe lists is possible, and can help further automate censorship measurements at scale.
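The extract-expand-search pipeline described above can be sketched with a toy keyword extractor. The paper's actual topic-modeling and search-engine steps are far more sophisticated; every function name below is hypothetical:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for"}

def extract_keywords(page_text, top_n=3):
    """Toy stand-in for the keyword-extraction step: rank non-stopword
    tokens from a seed page by frequency."""
    tokens = re.findall(r"[a-z]+", page_text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def expand_to_queries(keywords, related_topics):
    """Combine extracted keywords with related topics to form
    search-engine queries that surface new candidate URLs to probe."""
    return [f"{kw} {topic}" for kw in keywords for topic in related_topics]

seed_text = "Censorship of independent news and independent media reports"
kws = extract_keywords(seed_text)
queries = expand_to_queries(kws, ["blocked site", "mirror"])
assert "independent" in kws
assert len(queries) == len(kws) * 2
```

The resulting candidate URLs would then go through the paper's second phase: repeated accessibility checks from geographically distributed vantage points.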
Title: Exploring Design Opportunities for Family-Based Privacy Education in Informal Learning Spaces
Authors: Lanjing Liu, Lan Gao, Nikita Soni, Yaxing Yao
DOI: https://doi.org/10.56553/popets-2024-0071
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Children face increasing privacy risks and the need to navigate complex choices, while privacy education is not sufficient due to limited education scope and family involvement. We advocate for informal learning spaces (ILS) as a pioneering channel for family-based privacy education, given their established role in holistic technology and digital literacy education, which specifically targets family groups. In this paper, we conducted an interview study with eight families to understand current approaches to privacy education and engagement with ILS for family-based learning. Our findings highlight ILS's transformative potential in family privacy education, considering existing practices and challenges. We discuss the design opportunities for family-based privacy education in ILS, covering goals, content, engagement, and experience design. These insights contribute to future research on family-based privacy education in ILS.
Title: GCL-Leak: Link Membership Inference Attacks against Graph Contrastive Learning
Authors: Xiuling Wang, Wendy Hui Wang
DOI: https://doi.org/10.56553/popets-2024-0073
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Graph contrastive learning (GCL) has emerged as a successful method for self-supervised graph learning. It involves generating augmented views of a graph by augmenting its edges and aims to learn node embeddings that are invariant to graph augmentation. Despite its effectiveness, the potential privacy risks associated with GCL models have not been thoroughly explored. In this paper, we delve into the privacy vulnerability of GCL models through the lens of link membership inference attacks (LMIA). Specifically, we focus on the federated setting where the adversary has white-box access to the node embeddings of all the augmented views generated by the target GCL model. Designing such white-box LMIAs against GCL models presents a significant and unique challenge due to potential variations in link memberships among node pairs in the target graph and its augmented views. This variability renders members indistinguishable from non-members when relying solely on the similarity of their node embeddings in the augmented views. To address this challenge, our in-depth analysis reveals that the key distinguishing factor lies in the similarity of node embeddings within augmented views where the node pairs share identical link memberships as those in the training graph. However, this poses a second challenge, as information about whether a node pair has identical link membership in both the training graph and augmented views is only available during the attack training phase. This demands that the attack classifier handle the additional "identical-membership" information, which is available only for training and not for testing. To overcome this challenge, we propose GCL-LEAK, the first link membership inference attack against GCL models. The key component of GCL-LEAK is a new attack classifier model designed under the "Learning Using Privileged Information (LUPI)" paradigm, where the privileged information of "identical-membership" is encoded as part of the attack classifier's structure. Our extensive set of experiments on four representative GCL models showcases the effectiveness of GCL-LEAK. Additionally, we develop two defense mechanisms that introduce perturbation to the node embeddings. Our empirical evaluation demonstrates that both defense mechanisms significantly reduce attack accuracy while preserving the accuracy of GCL models.
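The attack's core signal, per-view embedding similarity for a candidate node pair, can be illustrated with a minimal sketch. The embeddings and function names here are toy assumptions; the real attack feeds such per-view features into the LUPI-based classifier described above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def attack_features(embeddings_per_view, node_u, node_v):
    """Feature vector for a candidate link (u, v): the pair's embedding
    similarity in each augmented view. The paper's key observation is
    that these similarities are most informative in views whose link
    membership for (u, v) matches the training graph."""
    return [cosine(view[node_u], view[node_v]) for view in embeddings_per_view]

# Two augmented views with 2-dimensional node embeddings (toy numbers).
views = [
    {"u": [1.0, 0.0], "v": [0.9, 0.1]},  # edge kept: embeddings similar
    {"u": [1.0, 0.0], "v": [0.0, 1.0]},  # edge dropped: dissimilar
]
feats = attack_features(views, "u", "v")
assert feats[0] > 0.9 and feats[1] < 0.1
```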
Title: Please Unstalk Me: Understanding Stalking with Bluetooth Trackers and Democratizing Anti-Stalking Protection
Authors: Alexander Heinrich, Leon Würsching, Matthias Hollick
DOI: https://doi.org/10.56553/popets-2024-0082
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: While designed to locate lost items, Bluetooth trackers are increasingly exploited for malign purposes, such as unwanted location tracking. This study probes deeper into this issue, focusing on the widespread use of these devices for stalking. Following a dual approach, we analyzed user data from a widely used tracking detection app (over 200,000 active installations) and conducted a comprehensive online survey (N=5,253). Our data analysis reveals a significant prevalence of trackers from major brands such as Apple, Tile, and Samsung. The user data also shows that the app sends about 1,400 alarms daily for unwanted tracking. Survey insights reveal that 44.28% of stalking victims had been subjected to location tracking, with cars emerging as the most common hideout for misused trackers, followed by backpacks and purses. These findings underscore the urgency for more robust solutions. Despite ongoing efforts by manufacturers and researchers, the misuse of Bluetooth trackers remains a significant concern. We advocate for developing more effective tracking detection mechanisms integrated into smartphones by default and creating supportive measures for individuals without smartphone access.
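A detection app's "follows-you" heuristic can be sketched as below. The thresholds, input representation, and function name are illustrative assumptions, not the studied app's actual logic:

```python
def is_suspicious(sightings, min_sightings=4, min_duration_min=30, min_distance_km=1.0):
    """Heuristic used (in spirit) by tracking-detection apps: an unknown
    tracker that keeps reappearing as the user moves through time and
    space triggers an unwanted-tracking alarm. Each sighting is a pair
    (minutes_elapsed, km_travelled_by_user)."""
    if len(sightings) < min_sightings:
        return False
    times = [t for t, _ in sightings]
    dists = [d for _, d in sightings]
    duration = max(times) - min(times)   # minutes since first sighting
    travelled = max(dists) - min(dists)  # km along the user's route
    return duration >= min_duration_min and travelled >= min_distance_km

# A tracker that follows the user for 41 minutes over 4 km...
following = [(0, 0.0), (12, 1.1), (25, 2.3), (41, 4.0)]
# ...versus one seen twice near a fixed spot (e.g., a neighbor's keys).
stationary = [(0, 0.0), (5, 0.1)]
assert is_suspicious(following) is True
assert is_suspicious(stationary) is False
```

The duration and distance conditions exist to suppress false alarms from trackers that merely happen to be nearby, which is why real apps tune such thresholds carefully.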
Title: Generational Differences in Understandings of Privacy Terminology
Authors: Charlotte Moremen, Jordan Hoogsteden, Eleanor Birrell
DOI: https://doi.org/10.56553/popets-2024-0094
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Prior work has consistently found that people have miscomprehensions and misunderstandings about technical terms. However, that work has exclusively studied general populations, usually recruited online. This work investigates the relationship between generational cohorts and their understandings of privacy terms, specifically cohorts of elementary school children (aged 10-11), young adults (aged 18-23), and retired adults (aged 73-92), all recruited offline. We surveyed participants about their understanding of and confidence with technical terms that commonly appear in privacy policies. We then moderated a post-survey focus group with each generational cohort in which participants discussed their reactions to the actual definitions along with their experience with technical privacy terms. We found that young adults had better understandings of technical terms than the other generations, despite all generations reporting being regular Internet users. Participants across all generational cohorts discussed themes of confusion and frustration with technical terms, and older adults particularly reported a sense of being left behind. Our results reinforce the need for improvement in the presentation of information about data use practices. Our results also demonstrate the need for more focused research and attention on the youngest and oldest members of society and their use of the Internet and technology.
Title: FlashSwift: A Configurable and More Efficient Range Proof With Transparent Setup
Authors: Nan Wang, Dongxi Liu
DOI: https://doi.org/10.56553/popets-2024-0067
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Bit-decomposition-based zero-knowledge range proofs in the discrete logarithm (DLOG) setting with a transparent setup, e.g., Bulletproof (IEEE S&P 18), Flashproof (ASIACRYPT 22), and SwiftRange (IEEE S&P 24), have garnered widespread popularity across various privacy-enhancing applications. These proofs aim to prove that a committed value falls within the non-negative range [0, 2^N-1] without revealing it, where N represents the bit length of the range. Despite their prevalence, the current implementations still suffer from suboptimal performance. Some exhibit reduced communication costs at the expense of increased computational costs while others experience the opposite. Presently, users are compelled to utilize these proofs in scenarios demanding stringent requirements for both communication and computation efficiency.

In this paper, we introduce FlashSwift, a stronger DLOG-based logarithmic-sized alternative. It stands out for its smaller proof sizes and significantly enhanced computational efficiency compared with the cutting-edge logarithmic-sized ones for the most common ranges, where N is no more than 64. It is developed by integrating the techniques from Flashproof and SwiftRange without using a trusted setup. The substantial efficiency gains stem from our dedicated efforts in overcoming the inherent incompatibility barrier between the two techniques. Specifically, when N=64, our proof achieves the same size as Bulletproof and exhibits 1.1 times the communication efficiency of SwiftRange. More importantly, compared with the two, it achieves 2.3 times and 1.65 times proving efficiency, and 3.2 times and 1.7 times verification efficiency, respectively. At the time of writing, our proof also creates two new records of the smallest proof sizes, 289 bytes and 417 bytes, for 8-bit and 16-bit ranges among all the bit-decomposition-based ones without requiring trusted setups. Moreover, to the best of our knowledge, it is the first configurable range proof that is adaptable to various scenarios with different specifications, where the configurability allows users to trade off communication efficiency for computational efficiency. In addition, we offer a bonus feature: FlashSwift supports the aggregation of multiple single proofs for efficiency improvement. Finally, we provide comprehensive performance benchmarks against the state-of-the-art ones to demonstrate its practicality.
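The bit-decomposition relation that underlies this whole family of proofs (Bulletproof, Flashproof, SwiftRange, FlashSwift) is easy to state in code. The sketch below shows only the arithmetic relation being proven, not the zero-knowledge machinery (commitments, challenges, inner-product arguments) that makes it a proof:

```python
def bit_decompose(v, n_bits):
    """Split v into bits. Proving v in [0, 2^N - 1] reduces to proving
    (in zero knowledge) that each b_i is a bit and that the linear
    combination sum(b_i * 2^i) equals the committed value."""
    assert 0 <= v < 2 ** n_bits, "value out of range"
    return [(v >> i) & 1 for i in range(n_bits)]

def recompose(bits):
    """The linear-combination side of the relation."""
    return sum(b << i for i, b in enumerate(bits))

bits = bit_decompose(42, 8)
assert all(b in (0, 1) for b in bits)  # the "bit" constraint the proof enforces
assert recompose(bits) == 42           # the linear-combination constraint
```

Any value outside [0, 2^N - 1] admits no such decomposition, which is exactly what the verifier is convinced of without ever learning v.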
Title: Attacking Connection Tracking Frameworks as used by Virtual Private Networks
Authors: Benjamin Mixon-Baca, Jeffrey Knockel, Diwen Xue, Tarun Ayyagari, Deepak Kapur, Roya Ensafi, Jedidiah R. Crandall
DOI: https://doi.org/10.56553/popets-2024-0070
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: VPNs (Virtual Private Networks) have become an essential privacy-enhancing technology, particularly for at-risk users like dissidents, journalists, NGOs, and others vulnerable to targeted threats. While previous research investigating VPN security has focused on cryptographic strength or traffic leakages, there remains a gap in understanding how lower-level primitives fundamental to VPN operations, like connection tracking, might undermine the security and privacy that VPNs are intended to provide.

In this paper, we examine the connection tracking frameworks used in common operating systems, identifying a novel exploit primitive that we refer to as the port shadow. We use the port shadow to build four attacks against VPNs that allow an attacker to intercept and redirect encrypted traffic, de-anonymize a VPN peer, or even portscan a VPN peer behind the VPN server. We build a formal model of modern connection tracking frameworks and identify that the root cause of the port shadow lies in five shared, limited resources. Through bounded model checking, we propose and verify six mitigations in terms of enforcing process isolation. We hope our work leads to more attention on the security aspects of lower-level systems and the implications of integrating them into security-critical applications.
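The shared-resource collision behind the port shadow can be modeled with a toy connection-tracking table. This is a deliberate simplification for intuition only; real frameworks such as Netfilter's conntrack key, time out, and rewrite entries in considerably more involved ways, and the paper's four attacks build on this primitive rather than being shown here:

```python
class ConnTracker:
    """Minimal NAT-style connection tracker. Entries are keyed on the
    5-tuple, and the VPN server's external port space is the shared,
    limited resource that peers compete over."""

    def __init__(self):
        # (proto, ext_ip, ext_port, remote_ip, remote_port) -> internal peer
        self.table = {}

    def add(self, proto, ext_ip, ext_port, remote_ip, remote_port, peer):
        key = (proto, ext_ip, ext_port, remote_ip, remote_port)
        if key in self.table:
            return False  # entry already owned; a later claimant is "shadowed"
        self.table[key] = peer
        return True

    def route_inbound(self, proto, ext_ip, ext_port, remote_ip, remote_port):
        return self.table.get((proto, ext_ip, ext_port, remote_ip, remote_port))

ct = ConnTracker()
# An attacker who is also a peer of the same VPN server claims the
# victim's source port first...
assert ct.add("udp", "vpn.example", 40000, "site.example", 443, peer="attacker")
# ...so the victim's identical 5-tuple cannot be installed,
assert not ct.add("udp", "vpn.example", 40000, "site.example", 443, peer="victim")
# ...and inbound packets for that flow are steered to the attacker.
assert ct.route_inbound("udp", "vpn.example", 40000, "site.example", 443) == "attacker"
```

The paper's proposed mitigations amount to isolating peers so they cannot contend for the same tracker entries in the first place.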
Title: Anonify: Decentralized Dual-level Anonymity for Medical Data Donation
Authors: Sarah Abdelwahab Gaballah, Lamya Abdullah, Mina Alishahi, Thanh Hoang Long Nguyen, Ephraim Zimmer, Max Mühlhäuser, Karola Marky
DOI: https://doi.org/10.56553/popets-2024-0069
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Medical data donation involves voluntarily sharing medical data with research institutions, which is crucial for advancing healthcare research. However, the sensitive nature of medical data poses privacy and security challenges. The primary concern is the risk of de-anonymization, where users can be linked to their donated data through background knowledge or communication metadata. In this paper, we introduce Anonify, a decentralized anonymity protocol offering strong user protection during data donation without reliance on a single entity. It achieves dual-level anonymity protection, covering both communication and data aspects, by leveraging Distributed Point Functions and incorporating k-anonymity and stratified sampling within a secret-sharing-based setting. Anonify ensures that the donated data is in a form that affords flexibility for researchers in their analyses. Our evaluation demonstrates the efficiency of Anonify in preserving privacy and optimizing data utility. Furthermore, the performance of machine learning algorithms on the anonymized datasets generated by the protocol shows high accuracy and precision.
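The k-anonymity component of Anonify's data-level protection can be illustrated with a minimal check-and-generalize sketch. Field names and the generalization step are hypothetical; Anonify additionally uses Distributed Point Functions, stratified sampling, and secret sharing, none of which appear here:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every combination of quasi-identifier values is shared by
    at least k records, so no single donor can be singled out."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def generalize_age(records, bucket=10):
    """One generalization step: replace exact ages with decade buckets."""
    return [{**r, "age": (r["age"] // bucket) * bucket} for r in records]

donations = [
    {"age": 34, "zip": "121", "dx": "flu"},
    {"age": 37, "zip": "121", "dx": "cold"},
    {"age": 31, "zip": "121", "dx": "flu"},
]
# Exact ages single each donor out...
assert not is_k_anonymous(donations, ["age", "zip"], k=2)
# ...but after one round of generalization the dataset is 2-anonymous.
assert is_k_anonymous(generalize_age(donations), ["age", "zip"], k=2)
```

The trade-off the paper evaluates is visible even in this toy: each generalization round protects donors but coarsens the data that researchers receive.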
Title: Compact Issuer-Hiding Authentication, Application to Anonymous Credential
Authors: Olivier Sanders, Jacques Traoré
DOI: https://doi.org/10.56553/popets-2024-0097
Published: 2024-07-01, Proceedings on Privacy Enhancing Technologies
Abstract: Anonymous credentials are cryptographic mechanisms enabling users to authenticate themselves with a fine-grained control on the information they leak in the process. They have been the topic of countless papers which have improved the performance of such mechanisms or proposed new schemes able to prove ever-more complex statements about the attributes certified by those credentials. However, although these papers have studied in depth the problem of the information leaked by the credential and/or the attributes, almost all of them have surprisingly overlooked the information one may infer from the knowledge of the credential issuer. In this paper we address this problem by showing how one can efficiently hide the actual issuer of a credential within a set of potential issuers. The novelty of our work is that we do not resort to zero-knowledge proofs but instead we show how one can tweak Pointcheval-Sanders signatures to achieve this issuer-hiding property in a compact way. This results in an efficient anonymous credential system that indeed provides a complete control of the information leaked in the authentication process. Our construction is moreover modular and can thus fit a wide spectrum of applications, notably for Self-Sovereign Identity (SSI) systems.