{"title":"After GDPR, Still Tracking or Not? Understanding Opt-Out States for Online Behavioral Advertising","authors":"Takahito Sakamoto, Masahiro Matsunaga","doi":"10.1109/SPW.2019.00027","DOIUrl":"https://doi.org/10.1109/SPW.2019.00027","url":null,"abstract":"A recent trend in Internet advertising has been online behavioral advertising (OBA). For users concerned about their privacy, ad agencies provide an OBA opt-out opportunity. However, previous work has shown that many users misunderstand what it means to have opted out on an opt-out website. In fact, ad agencies still track the browsers of users who have opted out of OBA. In this study, we clarified what it means to be in the OBA opt-out state by crawling numerous websites and collecting browser cookies. Moreover, we analyzed the difference between the attitudes of agencies regarding OBA between before and after the EU General Data Protection Regulation (GDPR) was implemented. We found that around half of agencies stop web tracking after opt-out. However, some agencies start tracking again when users begin browsing, so the number of agencies stopping web tracking eventually declines. Furthermore, we found no evidence of a difference in agency attitudes between before and after GDPR implementation.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124707401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Malicious Campaigns in Obfuscated JavaScript with Scalable Behavioral Analysis","authors":"Oleksii Starov, Yuchen Zhou, Jun Wang","doi":"10.1109/SPW.2019.00048","DOIUrl":"https://doi.org/10.1109/SPW.2019.00048","url":null,"abstract":"Modern security crawlers and firewall solutions have to analyze millions of websites on a daily basis, and significantly more JavaScript samples. At the same time, fast static approaches, such as file signatures and hash matching, often are not enough to detect advanced malicious campaigns, i.e., obfuscated, packed, or randomized scripts. As such, low-overhead yet efficient dynamic analysis is required. In the current paper we describe behavioral analysis after executing all the scripts on web pages, similarly to how real browsers do. Then, we apply light \"behavioral signatures\" to the collected dynamic indicators, such as global variables declared during runtime, popup messages shown to the user, established WebSocket connections. Using this scalable method for a month, we enhanced the coverage of a commercial URL filtering product by detecting 8,712 URLs with intrusive coin miners. We evaluated the impact of increased coverage through telemetry data and discovered that customers attempted to visit these abusive sites more than a million times. Moreover, we captured 4,633 additional distinct URLs that lead to scam, clickjacking, phishing, and other kinds of malicious JavaScript. Our findings provide insight into recent trends in unauthorized cryptographic coin-mining and show that various scam kits are currently active on the Web.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131103464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"You Talk Too Much: Limiting Privacy Exposure Via Voice Input","authors":"Tavish Vaidya, M. Sherr","doi":"10.1109/SPW.2019.00026","DOIUrl":"https://doi.org/10.1109/SPW.2019.00026","url":null,"abstract":"Voice synthesis uses a voice model to synthesize arbitrary phrases. Advances in voice synthesis have made it possible to create an accurate voice model of a targeted individual, which can then in turn be used to generate spoofed audio in his or her voice. Generating an accurate voice model of target's voice requires the availability of a corpus of the target's speech. This paper makes the observation that the increasing popularity of voice interfaces that use cloud-backed speech recognition (e.g., Siri, Google Assistant, Amazon Alexa) increases the public's vulnerability to voice synthesis attacks. That is, our growing dependence on voice interfaces fosters the collection of our voices. As our main contribution, we show that voice recognition and voice accumulation (that is, the accumulation of users' voices) are separable. This paper introduces techniques for locally sanitizing voice inputs before they are transmitted to the cloud for processing. In essence, such methods employ audio processing techniques to remove distinctive voice characteristics, leaving only the information that is necessary for the cloud-based services to perform speech recognition. Our preliminary experiments show that our defenses prevent state-of-the-art voice synthesis techniques from constructing convincing forgeries of a user's speech, while still permitting accurate voice recognition.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124015265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IOTFLA : A Secured and Privacy-Preserving Smart Home Architecture Implementing Federated Learning","authors":"U. Aïvodji, S. Gambs, Alexandre Martin","doi":"10.1109/SPW.2019.00041","DOIUrl":"https://doi.org/10.1109/SPW.2019.00041","url":null,"abstract":"Slowly but steadily, the Internet of Things (IoT) is becoming more and more ubiquitous in our daily life. However, it also brings important security and privacy challenges along with it, especially in a sensitive context such as the smart home. In this position paper, we propose a novel architecture for smart home, called our, focusing on the security and privacy aspects, which combines federated learning with secure data aggregation. We hope that our proposition will provide a step forward towards achieving more security and privacy in smart homes.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"214 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122382318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Side Channel Attacks in Computation Offloading Systems with GPU Virtualization","authors":"Sihang Liu, Yizhou Wei, Jianfeng Chi, F. H. Shezan, Yuan Tian","doi":"10.1109/SPW.2019.00037","DOIUrl":"https://doi.org/10.1109/SPW.2019.00037","url":null,"abstract":"The Internet of Things (IoT) and mobile systems nowadays are required to perform more intensive computation, such as facial detection, image recognition and even remote gaming, etc. Due to the limited computation performance and power budget, it is sometimes impossible to perform these workloads locally. As high-performance GPUs become more common in the cloud, offloading the computation to the cloud becomes a possible choice. However, due to the fact that offloaded workloads from different devices (belonging to different users) are being computed in the same cloud, security concerns arise. Side channel attacks on GPU systems have been widely studied, where the threat model is the attacker and the victim are running on the same operating system. Recently, major GPU vendors have provided hardware and library support to virtualize GPUs for better isolation among users. This work studies the side channel attacks from one virtual machine to another where both share the same physical GPU. We show that it is possible to infer other user's activities in this setup and can further steal others deep learning model.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131188419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MLSEC - Benchmarking Shallow and Deep Machine Learning Models for Network Security","authors":"P. Casas, Gonzalo Marín, G. Capdehourat, Maciej Korczyński","doi":"10.1109/SPW.2019.00050","DOIUrl":"https://doi.org/10.1109/SPW.2019.00050","url":null,"abstract":"Network security represents a keystone to ISPs, who need to cope with an increasing number of network attacks that put the network's integrity at risk. The high-dimensionality of network data provided by current network monitoring systems opens the door to the massive application of Machine Learning (ML) approaches to improve the detection and classification of network attacks. In recent years, machine learning-based systems have gained popularity for network security applications, usually considering the application of shallow models, where a set of expert handcrafted features are needed to pre-process the data before training. Deep Learning (DL) models can alleviate the need of domain expert knowledge by relying on their ability to learn feature representations from input raw or basic, non-processed data. Still, it is not clear today which is the best model or best model-category to manage network security, as in general, only adhoc and tailored approaches have been proposed and evaluated so far. In this paper we train and benchmark different ML models for detection of network attacks in different real network data. We consider an extensive battery of supervised ML models, including both shallow and deep models, taking as input either pre-computed domain-knowledge based input features, or raw, byte-stream inputs. Proposed models are evaluated either using real, in the wild network measurements coming from the WIDE backbone network – the well-known MAWILab dataset, and through publicly available datasets. Results suggest that deep learning models can provide similar results to the best-performing shallow models, but without any sort of expert handcrafted inputs.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133415968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Membership Inference Attacks Against Adversarially Robust Deep Learning Models","authors":"Liwei Song, R. Shokri, Prateek Mittal","doi":"10.1109/SPW.2019.00021","DOIUrl":"https://doi.org/10.1109/SPW.2019.00021","url":null,"abstract":"In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined together. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged for a small area around each sample in the training dataset. Intuitively, adversarial defenses may rely more on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and corresponding undefended models, we find that the adversarial training method is indeed more susceptible to membership inference attacks, and the privacy leakage is directly correlated with model robustness. We also find that the provable defense approach does not lead to enhanced success of membership inference attacks. However, this is achieved by significantly sacrificing the accuracy of the model on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved in these two approaches.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132707431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge is Power: Systematic Reuse of Privacy Knowledge for Threat Elicitation","authors":"Kim Wuyts, Laurens Sion, D. Landuyt, W. Joosen","doi":"10.1109/SPW.2019.00025","DOIUrl":"https://doi.org/10.1109/SPW.2019.00025","url":null,"abstract":"Privacy threat modeling is difficult. Identifying relevant threats that cause privacy harm requires an extensive assessment of common potential privacy issues for all elements in the system-under-analysis. In practice, the outcome of a threat modeling exercise thus strongly depends on the level of experience and expertise of the analyst. However, capturing (at least part of) this privacy expertise in a reusable threat knowledge base (i.e. an inventory of common threat types), such as LINDDUN's and STRIDE's threat trees, can greatly improve the efficiency of the threat elicitation process and the overall quality of identified threats. In this paper, we highlight the problems of current knowledge bases, such as limited semantics and lack of instantiation logic, and discuss the requirements for a privacy threat knowledge base that streamlines threat elicitation efforts.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122586203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MaxNet: Neural Network Architecture for Continuous Detection of Malicious Activity","authors":"Petr Gronát, Javier Alejandro Aldana-Iuit, M. Bálek","doi":"10.1109/SPW.2019.00018","DOIUrl":"https://doi.org/10.1109/SPW.2019.00018","url":null,"abstract":"This paper addresses the detection of malware activity in a running application on the Android system. The detection is based on dynamic analysis and is formulated as a weakly supervised problem. We design an RNN sequential architecture able to continuously detect malicious activity using the proposed max-loss objective. The experiments were performed on a large industrial dataset consisting of 361,265 samples. The results demonstrate the performance of 96.2% true positive rate at 1.6% false positive rate which is superior to the state-of-the-art results. As part of this work, we release the dataset to the public.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116584571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Counting Outdated Honeypots: Legal and Useful","authors":"Alexander Vetterl, R. Clayton, I. Walden","doi":"10.1109/SPW.2019.00049","DOIUrl":"https://doi.org/10.1109/SPW.2019.00049","url":null,"abstract":"Honeypots are intended to be covert and so little is known about how many are deployed or who is using them. We used protocol deviations at the SSH transport layer to fingerprint Kippo and Cowrie, the two most popular medium interaction SSH honeypots. Several Internet-wide scans over a one year period revealed the presence of thousands of these honeypots. Sending specific commands revealed their patch status and showed that many systems were not up to date: a quarter or more were not fully updated and by the time of our last scan 20% of honeypots were still running Kippo, which had last been updated several years earlier. However, our paper reporting these results was rejected from a major conference on the basis that our interactions with the honeypots were illegal and hence the research was unethical. We later published a much redacted account of our research which described the fingerprinting but omitted the results we had gained from the issuing of commands to check the patch status. In the present work we provide the missing results, but start with an extended ethical justification for our research and a detailed legal analysis to show why we did not infringe cybersecurity laws.","PeriodicalId":125351,"journal":{"name":"2019 IEEE Security and Privacy Workshops (SPW)","volume":"251 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123027612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}