{"title":"16th Workshop on Privacy in the Electronic Society (WPES 2017)","authors":"Adam J. Lee","doi":"10.1145/3133956.3137047","DOIUrl":"https://doi.org/10.1145/3133956.3137047","url":null,"abstract":"The 16th Workshop on Privacy in the Electronic Society was held on October 30, 2017 in conjunction with the 24th ACM Conference on Computer and Communications Security (CCS 2017) in Dallas, Texas, USA. The goal of WPES is to bring together a diverse group of privacy researchers and practitioners to discuss privacy problems that arise in global, interconnected societies, and potential solutions to them. The program for the workshop contains 14 full papers and 5 short papers selected from a total of 56 submissions. Specific topics covered in the program include but are not limited to: de-anonymization, fingerprinting and profiling, location privacy, and private memory systems.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"332 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114964299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DUPLO: Unifying Cut-and-Choose for Garbled Circuits","authors":"V. Kolesnikov, J. Nielsen, Mike Rosulek, Ni Trieu, Roberto Trifiletti","doi":"10.1145/3133956.3133991","DOIUrl":"https://doi.org/10.1145/3133956.3133991","url":null,"abstract":"Cut-and-choose (CC) is the standard approach to making Yao's garbled circuit two-party computation (2PC) protocol secure against malicious adversaries. Traditional cut-and-choose operates at the level of entire circuits, whereas the LEGO paradigm (Nielsen & Orlandi, TCC 2009) achieves asymptotic improvements by performing cut-and-choose at the level of individual gates. In this work we propose a unified approach called DUPLO that spans the entire continuum between these two extremes. The cut-and-choose step in our protocol operates on the level of arbitrary circuit \"components,\" which can range in size from a single gate to the entire circuit itself. With this entire continuum of parameter values at our disposal, we find that the best way to scale 2PC to computations of realistic size is to use CC components of intermediate size, not at either extreme. On computations requiring several million gates or more, our more general approach to CC gives a 4-7x improvement over existing approaches. 
In addition to our technical contributions of modifying and optimizing previous protocol techniques to work with general CC components, we also provide an extension of the recent Frigate circuit compiler (Mood et al., EuroS&P 2016) to effectively express any C-style program in terms of components which can be processed efficiently using our protocol.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117248946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POSTER: Cyber Attack Prediction of Threats from Unconventional Resources (CAPTURE)","authors":"A. Okutan, Gordon Werner, K. McConky, S. Yang","doi":"10.1145/3133956.3138834","DOIUrl":"https://doi.org/10.1145/3133956.3138834","url":null,"abstract":"This paper outlines the design, implementation, and evaluation of CAPTURE, a novel automated, continuously operating cyber attack forecast system. It uses a broad range of unconventional signals from various public and private data sources, along with a set of signals forecasted via the Auto-Regressive Integrated Moving Average (ARIMA) model. When generating signals, auto- and cross-correlation are used to determine the optimal signal aggregation and lead times. The generated signals are used to train a Bayesian classifier against the ground truth of each attack type. We show that it is possible to forecast future cyber incidents using CAPTURE and that considering lead time can improve forecast performance.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127493046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Structurally Anomalous Logins Within Enterprise Networks","authors":"Hossein Siadati, N. Memon","doi":"10.1145/3133956.3134003","DOIUrl":"https://doi.org/10.1145/3133956.3134003","url":null,"abstract":"Many network intrusion detection systems use byte sequences to detect lateral movements that exploit remote vulnerabilities. Attackers bypass such detection by stealing valid credentials and using them to move from one computer to another without creating abnormal network traffic. We call this method Credential-based Lateral Movement. To detect this type of lateral movement, we develop the concept of a Network Login Structure, which specifies the normal logins within a given network. Our method models a network's login structure by automatically extracting a collection of login patterns using a variation of the market-basket algorithm. We then employ an anomaly detection approach to detect malicious logins that are inconsistent with the enterprise network's login structure. Evaluations show that the proposed method is able to detect malicious logins in a real setting. In a simulated attack, our system detected 82% of malicious logins with a 0.3% false positive rate. For evaluation, we used a real dataset of millions of logins collected over the course of five months at a global financial company.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127499902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","authors":"B. Thuraisingham, David Evans, T. Malkin, Dongyan Xu","doi":"10.1145/3133956","DOIUrl":"https://doi.org/10.1145/3133956","url":null,"abstract":"Welcome to the 24th ACM Conference on Computer and Communications Security!\n\nSince 1993, CCS has been the ACM's flagship conference for research in all aspects of computing and communications security and privacy. This year's conference attracted a record 836 reviewed research paper submissions, of which a record 151 papers were selected for presentation at the conference and inclusion in the proceedings.\n\nThe papers were reviewed by a Program Committee of 146 leading researchers from academia, government, and industry around the world. Reviewing was done in three rounds, with every paper being reviewed by two PC members in the first round and additional reviews being assigned in later rounds depending on the initial reviews. Authors had an opportunity to respond to reviews received in the first two rounds. We used a subset of PC members, designated as the Discussion Committee, to help ensure that reviewers reconsidered their reviews in light of the author responses and to facilitate substantive discussions among the reviewers. Papers were discussed extensively online in the final weeks of the review process, and late reviews were requested from both PC members and external reviewers when additional expertise or perspective was needed to reach a decision. We are extremely grateful to the PC members for all their hard work in the review process, and to the external reviewers who contributed to selecting the papers for CCS.\n\nBefore starting the review process, the PC chairs removed six of the 842 submissions that clearly violated submission requirements or were duplicates, leaving 836 papers to review. 
In general, we were lenient on the requirements, excluding only papers that appeared to deliberately disregard the submission requirements. Instead of excluding papers that carelessly deanonymized their authors, or that in the opinion of the chairs abused appendices, we redacted the offending content (by modifying the submitted PDF), allowed the papers to be reviewed, and offered to make redacted appendix content available to reviewers upon request.\n\nOur review process involved three phases. In the first phase, each paper was assigned two reviewers. Following last year's practice, we adopted the Toronto Paper Matching System (TPMS) for making most of the review assignments, which were then adjusted based on technical preferences declared by reviewers. Each reviewer had about three weeks to complete reviews for around 12 papers. Based on the results of these reviews, an additional reviewer was assigned to every paper that had at least one positive-leaning review. Papers whose initial reviews were both negative, but with low confidence or significant positive aspects, were also assigned additional reviews. At the conclusion of the second reviewing round, authors had an opportunity to see the initial reviews and to submit a short rebuttal. To ensure that all the authors' responses were considered seriously ","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125182399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POSTER: Probing Tor Hidden Service with Dockers","authors":"Jonghyeon Park, Youngseok Lee","doi":"10.1145/3133956.3138849","DOIUrl":"https://doi.org/10.1145/3133956.3138849","url":null,"abstract":"Tor is a commonly used anonymity network that provides hidden services. As the number of hidden services using Tor's anonymous network has been steadily increasing every year, so has the number of services that abuse Tor's anonymity. Existing research on Tor has focused mainly on its security loopholes and anonymity; as a result, collecting and analyzing the contents of Tor's hidden services has received far less attention. In addition, the slow access speed of the Tor browser makes it difficult to observe the dynamics of hidden services. In this work, we present a tool that monitors the status of hidden services for the analysis of authentic hidden service contents, and we have automated our tool with the container software Docker to improve crawling performance. From August 12, 2017 to August 20, 2017, we collected a total of 9,176 sites and analyzed the contents of 100 pages.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126127818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POSTER: PriReMat: A Distributed Tool for Privacy Preserving Record Linking in Healthcare","authors":"D. Kar, Ibrahim Lazrig, I. Ray, I. Ray","doi":"10.1145/3133956.3138845","DOIUrl":"https://doi.org/10.1145/3133956.3138845","url":null,"abstract":"Medical institutions must comply with various federal and state policies when they share sensitive medical data with others. Traditionally, such sharing is performed by sanitizing the identifying information from individual records. However, such sanitization removes the ability to later link records belonging to the same patient across multiple institutions, which is essential for medical cohort discovery. Currently, human honest brokers assume stewardship of non-sanitized data and manually facilitate such cohort discovery. However, this is slow and prone to error, and any compromise of the honest broker breaks the system. In this work, we describe PriReMat, a toolset that we have developed for privacy-preserving record linkage. The underlying protocol is based on strong security primitives that we presented earlier. This work describes the distributed implementation over untrusted machines and networks.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115064488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"POSTER: An Empirical Measurement Study on Multi-tenant Deployment Issues of CDNs","authors":"Zixi Cai, Zigang Cao, G. Xiong, Z. Li, W. Xia","doi":"10.1145/3133956.3138852","DOIUrl":"https://doi.org/10.1145/3133956.3138852","url":null,"abstract":"Content delivery networks (CDNs) play an important role in accelerating users' visit speed, bringing a good experience to popular web sites around the world. It has become common for CDN providers to offer HTTPS support to tenants as a security-enhancing service. When several tenants are deployed to share the same IP address for resource efficiency and cost, CDN providers must configure their services comprehensively to ensure that all tenants' sites respond correctly to users' requests. Otherwise, issues such as denial of service (DoS) and privacy leakage can arise, causing a very bad experience for users as well as potential economic loss for tenants, especially under a hybrid deployment of HTTP and HTTPS. We examine the deployments of typical multi-tenant CDN providers by active measurement and find that two providers, namely Akamai and ChinaCenter, have configuration problems which can result in DoS through certificate name mismatch errors. Several recommendations are given to help mitigate the issue. 
We believe that our study is meaningful for improving the security and robustness of CDNs.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122085356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No-Match Attacks and Robust Partnering Definitions: Defining Trivial Attacks for Security Protocols is Not Trivial","authors":"Yong Li, Sven Schäge","doi":"10.1145/3133956.3134006","DOIUrl":"https://doi.org/10.1145/3133956.3134006","url":null,"abstract":"An essential cornerstone of the definition of security for key exchange protocols is the notion of partnering. The de facto standard definition of partnering is that of (partial) matching conversations (MC), which essentially states that two processes are partnered if every message sent by the first is actually received by the second and vice versa. We show that proving security under MC-based definitions is error-prone. To this end, we introduce no-match attacks, a new class of attacks that renders many existing security proofs invalid. We show that no-match attacks are often hard to avoid in MC-based security definitions without (a) modifying the original protocol or (b) resorting to cryptographic primitives with special properties. Finally, we show several ways to thwart no-match attacks. Most notably, and as one of our major contributions, we provide a conceptually new definition of partnering that circumvents the problems of an MC-based partnering notion while preserving all its advantages. Our new notion of partnering not only makes security definitions for key exchange model practice much more closely. 
In contrast to many other security notions for key exchange, it also adheres to the high standards of good cryptographic definitions: it is general, supports cryptographic intuition, allows for efficient falsification, and provides a fundamental composition property that MC-based notions lack.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124645413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forward Secure Dynamic Searchable Symmetric Encryption with Efficient Updates","authors":"Kee Sung Kim, Minkyu Kim, Dongsoo Lee, J. Park, Woo-Hwan Kim","doi":"10.1145/3133956.3133970","DOIUrl":"https://doi.org/10.1145/3133956.3133970","url":null,"abstract":"The recently proposed file-injection attacks highlight the importance of forward security in dynamic searchable symmetric encryption (DSSE). Forward security makes it possible to thwart these attacks by hiding information about newly added files that match a previous search query. However, there are still only a few DSSE schemes that provide forward security, and they have factors that hinder efficiency. In particular, none of these schemes supports actual data deletion, which increases both storage space and computational complexity. In this paper, we design and implement a forward-secure DSSE scheme with search and update complexity that is optimal from both the computation and the communication points of view. As a starting point, we propose a new, simple, theoretical data structure, called a dual dictionary, that takes advantage of both the inverted and the forward indexes at the same time. This data structure allows data to be deleted explicitly and in real time, which greatly improves efficiency compared to previous work. In addition, our scheme provides forward security by encrypting newly added data with fresh keys unrelated to previous search tokens. We implemented our scheme on the Enron email and Wikipedia datasets and measured its performance. 
The comparison with Sophos shows that our scheme is very efficient in practice for both searches and updates in dynamic environments.","PeriodicalId":191367,"journal":{"name":"Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123660559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}