{"title":"Reconciling Utility with Privacy in Genomics","authors":"Mathias Humbert, Erman Ayday, J. Hubaux, A. Telenti","doi":"10.1145/2665943.2665945","DOIUrl":"https://doi.org/10.1145/2665943.2665945","url":null,"abstract":"Direct-to-consumer genetic testing makes it possible for everyone to learn their genome sequences. In order to contribute to medical research, a growing number of people publish their genomic data on the Web, sometimes under their real identities. However, this is at odds not only with their own privacy but also with the privacy of their relatives. The genomes of relatives being highly correlated, some family members might be opposed to revealing any of the family's genomic data. In this paper, we study the trade-off between utility and privacy in genomics. We focus on the most relevant kind of variants, namely single nucleotide polymorphisms (SNPs). We take into account the fact that the SNPs of an individual contain information about the SNPs of his family members and that SNPs are correlated with each other. Furthermore, we assume that SNPs can have different utilities in medical research and different levels of sensitivity for individuals. We propose an obfuscation mechanism that enables the genomic data to be publicly available for research, while protecting the genomic privacy of the individuals in a family. Our genomic-privacy preserving mechanism relies upon combinatorial optimization and graphical models to optimize utility and meet privacy requirements. We also present an extension of the optimization algorithm to cope with the non-linear constraints induced by the correlations between SNPs. Our results on real data show that our proposed technique maximizes the utility for genomic research and satisfies family members' privacy constraints.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122056741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying Webbrowsers in Encrypted Communications","authors":"Jiangmin Yu, Eric Chan-Tin","doi":"10.1145/2665943.2665968","DOIUrl":"https://doi.org/10.1145/2665943.2665968","url":null,"abstract":"Webbrowser fingerprinting is a powerful tool to identify an Internet end-user. Previous research has shown that the information extracted from webbrowsers can uniquely identify an end-user. To collect webbrowser specific information, intentional JavaScript codes are embedded in web pages. In this paper, we show that fingerprinting characteristics of a webbrowser can also be collected by solely checking the network traffic data generated when browsing a website. We collect network traffic data generated by browsing the homepage of the most popular websites. Based on this data, we show that the browser fingerprinting characteristics can be inferred with high accuracy. Among these characteristics, type of webbrowser can be identified with over 70% accuracy rate. Usage status of popular plug-ins like JavaScript and flash can also be accurately identified.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124790800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting Users' Inconsistent Preferences in Online Social Networks to Discover Private Friendship Links","authors":"Lei Jin, Hassan Takabi, Xuelian Long, J. Joshi","doi":"10.1145/2665943.2665956","DOIUrl":"https://doi.org/10.1145/2665943.2665956","url":null,"abstract":"In a social network system, a friendship relation between two users is usually represented by an undirected link and it is visible in both users' friend lists. Such a dual visibility of a friendship link may raise privacy threats. This is because both the users of a friendship link can separately control its visibility to other users and their preferences of sharing such a friendship link may not be consistent. Even if one of them conceals the friendship link from a third user, that third user may find the link through the other user's friend list. In addition, as most social network users allow their friends to see their friend lists, an adversary can exploit these inconsistent policies caused by users' conflicting preferences to identify and infer many of a targeted user's friends and even reconstruct the topology of an entire social network. In this paper, we propose, characterize and evaluate such an attack referred as the Friendship Identification and Inference (FII) attack. In an FII attack scenario, an adversary first accumulates the initial attack relevant information based on the friend lists visible to him in a social network. Then, he utilizes this information to identify and infer a target's friends using a random walk based approach. We formally define the attack and present the attack steps, the attack algorithm and various attack schemes. Our experimental results using three real social network datasets show that FII attacks are effective in inferring private friendship links of a target and predicting the topology of the social network. Currently, most popular social network systems, such as Facebook, LinkedIn and Foursquare, are susceptible to FII attacks.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123554607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FARB: Fast Anonymous Reputation-Based Blacklisting without TTPs","authors":"Li Xi, D. Feng","doi":"10.1145/2665943.2665947","DOIUrl":"https://doi.org/10.1145/2665943.2665947","url":null,"abstract":"Anonymous blacklisting schemes that do not rely on trusted third parties (TTPs) are desirable as they can block misbehaving users while protecting user privacy. Recent TTP-free schemes such as BLACR and PERM present reputation-based blacklisting, for which the service provider (SP) can assign positive or negative scores to anonymous sessions and block users whose reputations are not high enough. Though being the state of the art in anonymous blacklisting, these schemes are heavyweight and only able to support tens of authentications per minute in practical settings. We present FARB, the first reputation-based blacklisting scheme which has constant computational complexity both on the SP and user side. FARB thus supports a reputation list with billions of entries and is efficient enough for heavy-loaded SPs with thousands of authentications per minute. On the user side, FARB is fast enough even for mobile devices and supports flexible rate-limiting. We also present a novel fine-grained weighted extension which allows the SP to ramp up penalties for repeated misbehaviors according to the severity of the misbehaving user's past sessions.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"183 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120842487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prolonging the Hide-and-Seek Game: Optimal Trajectory Privacy for Location-Based Services","authors":"George Theodorakopoulos, R. Shokri, C. Troncoso, J. Hubaux, J. Boudec","doi":"10.1145/2665943.2665946","DOIUrl":"https://doi.org/10.1145/2665943.2665946","url":null,"abstract":"Human mobility is highly predictable. Individuals tend to only visit a few locations with high frequency, and to move among them in a certain sequence reflecting their habits and daily routine. This predictability has to be taken into account in the design of location privacy preserving mechanisms (LPPMs) in order to effectively protect users when they expose their whereabouts to location-based services (LBSs) continuously. In this paper, we describe a method for creating LPPMs tailored to a user's mobility profile taking into her account privacy and quality of service requirements. By construction, our LPPMs take into account the sequential correlation across the user's exposed locations, providing the maximum possible trajectory privacy, i.e., privacy for the user's past, present location, and expected future locations. Moreover, our LPPMs are optimal against a strategic adversary, i.e., an attacker that implements the strongest inference attack knowing both the LPPM operation and the user's mobility profile. The optimality of the LPPMs in the context of trajectory privacy is a novel contribution, and it is achieved by formulating the LPPM design problem as a Bayesian Stackelberg game between the user and the adversary. An additional benefit of our formal approach is that the design parameters of the LPPM are chosen by the optimization algorithm.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130411575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Automated Social Graph De-anonymization Technique","authors":"K. Sharad, G. Danezis","doi":"10.1145/2665943.2665960","DOIUrl":"https://doi.org/10.1145/2665943.2665960","url":null,"abstract":"We present a generic and automated approach to re-identifying nodes in anonymized social networks which enables novel anonymization techniques to be quickly evaluated. It uses machine learning (decision forests) to matching pairs of nodes in disparate anonymized sub-graphs. The technique uncovers artefacts and invariants of any black-box anonymization scheme from a small set of examples. Despite a high degree of automation, classification succeeds with significant true positive rates even when small false positive rates are sought. Our evaluation uses publicly available real world datasets to study the performance of our approach against real-world anonymization strategies, namely the schemes used to protect datasets of The Data for Development (D4D) Challenge. We show that the technique is effective even when only small numbers of samples are used for training. Further, since it detects weaknesses in the black-box anonymization scheme it can re-identify nodes in one social network when trained on another.","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121624450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","authors":"Gail-Joon Ahn, Anupam Datta","doi":"10.1145/2665943","DOIUrl":"https://doi.org/10.1145/2665943","url":null,"abstract":"","PeriodicalId":408627,"journal":{"name":"Proceedings of the 13th Workshop on Privacy in the Electronic Society","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116538386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}