{"title":"Securing Critical Infrastructure Through Innovative Use Of Merged Hierarchical Deep Neural Networks","authors":"Lav Gupta","doi":"10.1109/PST52912.2021.9647771","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647771","url":null,"abstract":"Multi-clouds are becoming central to large and modern applications including those in business, industry and critical infrastructure sectors. Designers usually deploy these clouds hierarchically to get the best advantage of the low latency of the edge clouds and the high processing capabilities of the core clouds. The data that flows into the clouds for processing and storage and moves out to other system domains for further use must cross multiple trust boundaries and, as a result, faces large attack surfaces. This gives malicious actors abundant opportunity to penetrate and potentially harm organizations or bring down critical services, causing widespread disruptions and mayhem. Deep neural network models can be used in innovative ways to protect the confidentiality and integrity of dataflows in the clouds. However, the use of deep learning comes with some challenges. In large multi-location and multi-cloud environments, deep learning models grow rapidly in size and complexity, inhibiting fast training of cloud models and making it difficult to maintain accuracy of detection of known and unknown attacks on data-in-motion. This impedes their use in critical infrastructure services. We propose innovative distributed-hierarchical-merged models, which make use of cooperative training at the edge and core clouds, and the power of data and model parallelism, to achieve rapid training with high accuracy. Our broad objectives in this paper are twofold: Firstly, we show that merged hierarchical deep learning models, working cooperatively in the multi-cloud, significantly reduce the parameters to be trained and result in faster core cloud training time.
Secondly, training the merged core model with a distributed strategy for data parallelism on CPUs and GPUs further reduces the training time significantly. We also show that the merged models achieve an improvement of about 25% over unmerged models, with accuracy of detection of unknown attacks in the range of 96.9-99.5%.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125251755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DaRoute: Inferring trajectories from zero-permission smartphone sensors","authors":"C. Roth, N. Dinh, Marc Roßberger, D. Kesdogan","doi":"10.1109/PST52912.2021.9647811","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647811","url":null,"abstract":"Nowadays, smartphones are equipped with a multitude of sensors, including GPS, that enable location-based services. However, leakage or misuse of user locations poses a severe privacy threat, motivating operating systems to restrict direct access to these resources for applications. Nevertheless, this work demonstrates how an adversary can deduce sensitive location information by inferring a vehicle’s trajectory through inbuilt motion sensors collectible by zero-permission mobile apps. To this end, the presented attack incorporates data from the accelerometer, the gyroscope, and the magnetometer. We then extract so-called path events from the raw data and match them against reference data from OpenStreetMap. Using real-world data from three different cities, several drivers, and different smartphones, we show that our approach can infer traveled routes with high accuracy within minutes while remaining robust to sensor errors. Our experiments show that even for areas as large as approximately 4500 km², the accuracy of detecting the correct route is as high as 87.14%, significantly outperforming similar approaches from Narain et al.
and Waltereit et al.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114644578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FOX: Fooling with Explanations: Privacy Protection with Adversarial Reactions in Social Media","authors":"Noreddine Belhadj Cheikh, Abdessamad Imine, M. Rusinowitch","doi":"10.1109/PST52912.2021.9647778","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647778","url":null,"abstract":"Social media data has been mined over the years to predict individual sensitive attributes such as political and religious beliefs. Indeed, mining such data can improve the user experience with personalization and freemium services. Still, it can also be harmful and discriminative when used to make critical decisions, such as employment. In this work, we investigate social media privacy protection against attribute inference attacks using machine learning explainability and adversarial defense strategies. More precisely, we propose FOX (FOoling with eXplanations), an adversarial attack framework to explain and fool sensitive attribute inference models by generating effective adversarial reactions. We evaluate the performance of FOX against other SOTA baselines in a black-box setting by attacking five gender attribute classifiers trained on Facebook picture reactions, specifically (i) comments generated by Facebook users excluding the picture owner, and (ii) textual tags (i.e., alt-text) generated by Facebook.
Our experiments show that FOX successfully fools the classifiers (about 99.7% and 93.2% of the time), outperforms the SOTA baselines, and exhibits good transferability of adversarial features.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121736514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Searching on Non-Systematic Erasure Codes","authors":"Atthapan Daramas, Vimal Kumar","doi":"10.1109/PST52912.2021.9647779","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647779","url":null,"abstract":"Non-systematic erasure codes provide confidentiality of data and can even provide strong security guarantees with appropriate parameter selection. So far, however, they have lacked an elegant and efficient method for keyword search over the codes. While the obvious method of reconstruction before search can be too slow, direct search produces inaccurate results. This has been one of the barriers to the wider adoption of non-systematic erasure codes in distributed and secure storage. In this paper we present an elegant solution to this problem by building an index data structure that we call the Search Vector, created from the generator matrix of the erasure code. We show that this method introduces a very small amount of delay in return for a high degree of accuracy in search results. We also analyse the security of the scheme in terms of information leakage and show that the information leaked from the index data structure is very small even when one assumes the worst-case scenario of the attacker having access to the Search Vector.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115134911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross the Chasm: Scalable Privacy-Preserving Federated Learning against Poisoning Attack","authors":"Yiran Li, Guiqiang Hu, Xiaoyuan Liu, Zuobin Ying","doi":"10.1109/PST52912.2021.9647750","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647750","url":null,"abstract":"Privacy protection and defense against poisoning attacks are two critical problems hindering the proliferation of federated learning (FL). However, they are two inherently contrary issues. For constructing a privacy-preserving FL, solutions tend to transform the original information (e.g., gradient information) to be indistinguishable. Nevertheless, defending against poisoning attacks requires identifying abnormal information via its distinguishability. Therefore, it is a real challenge to handle these two issues simultaneously under a unified framework. In this paper, we build a bridge between them, proposing a scalable privacy-preserving federated learning (SPPFL) scheme against poisoning attacks. To be specific, based on the technology of secure multi-party computation (MPC), we construct a secure framework to protect users’ privacy during the training process, while punishing poisoners via the method of distance evaluation. Besides, we conduct extensive experiments to illustrate the performance of our scheme.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121702172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Privacy-Preserving Classification-as-a-Service for DGA Detection","authors":"Arthur Drichel, M. A. Gurabi, Tim Amelung, Ulrike Meyer","doi":"10.1109/PST52912.2021.9647755","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647755","url":null,"abstract":"Domain generation algorithm (DGA) classifiers can be used to detect and block the establishment of a connection between bots and their command-and-control server. Classification-as-a-service (CaaS) can separate the classification of domain names from the need for real-world training data, which are difficult to obtain but mandatory for well-performing classifiers. However, domain names as well as trained models may contain privacy-critical information which should not be leaked to either the model provider or the data provider. Several generic frameworks for privacy-preserving machine learning (ML) have been proposed in the past that can preserve data and model privacy. Thus, it seems high time to combine state-of-the-art DGA classifiers and privacy-preservation frameworks to enable privacy-preserving CaaS, preserving both data and model privacy for the DGA detection use case. In this work, we examine the real-world applicability of four generic frameworks for privacy-preserving ML using different state-of-the-art DGA detection models. Our results show that out-of-the-box DGA detection models are computationally infeasible for privacy-preserving inference in a real-world setting. We propose model simplifications that achieve a reduction in inference latency of up to 95%, and up to 97% in communication complexity, while causing an accuracy penalty of less than 0.17%. Despite this significant improvement, real-time classification is still not feasible in a traditional two-party setting.
Thus, more efficient secure multi-party computation (SMPC) or homomorphic encryption (HE) schemes are required to enable real-world feasibility of privacy-preserving CaaS for DGA detection.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115791523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Change Detection in Privacy Policies with Natural Language Processing","authors":"Andrick Adhikari, Rinku Dewri","doi":"10.1109/PST52912.2021.9647767","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647767","url":null,"abstract":"Privacy policies notify users about the privacy practices of websites, mobile apps, and other products and services. However, users rarely read them and struggle to understand their contents. Due to the complicated nature of these documents, it gets even harder to understand and take note of any changes of interest or concern when the policies are changed or revised. With advances in machine learning and natural language processing, tools that can automatically annotate sentences of policies have been developed. These annotations can help a user identify and understand relevant parts of a privacy policy. In this paper, we present our attempt to further such annotations by also detecting the important changes that occurred across sentences. Using supervised machine learning models, word-embedding, similarity matching, and structural analysis of sentences, we present a process that takes two different versions of a privacy policy as input, matches the sentences of one version to another based on semantic similarity, and identifies relevant changes between two matched sentences. 
We present the results and insights of applying our approach on 79 privacy policies manually downloaded from Facebook, WhatsApp, Twitter, Google, LinkedIn and Snapchat, spanning the period from 1999 to 2020.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127388927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Light-Weight Active Security for Detecting DDoS Attacks in Containerised ICPS","authors":"Farzana Zahid, Matthew M. Y. Kuo, R. Sinha","doi":"10.1109/PST52912.2021.9647782","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647782","url":null,"abstract":"In Industrial Cyber-Physical Systems (ICPS), containerisation promises high scalability, reconfigurability and dependability. Denial of Service (DoS/DDoS) is a significant security threat in containerised ICPS applications, which execute on resource-constrained computers like PLCs and cannot support traditional security mechanisms like firewalls, which sacrifice performance and throughput. We propose a novel, light-weight active security approach to detecting DoS/DDoS attacks through frequency analysis of network traffic (packets). Our approach identifies attacks by recording a frequency signature of the flow of packets in an ICPS under normal operation. Subsequently, an attack is modelled as any anomaly in the network that modifies the frequency profile of network traffic in the ICPS. Our prototype implementation and evaluation show that this active security method is light-weight and suitable for resource-constrained ICPS platforms.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125642261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SegmentPerturb: Effective Black-Box Hidden Voice Attack on Commercial ASR Systems via Selective Deletion","authors":"Ganyu Wang, Miguel Vargas Martin","doi":"10.1109/PST52912.2021.9647775","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647775","url":null,"abstract":"Voice control systems continue to become more pervasive as they are deployed in mobile phones, smart home devices, automobiles, etc. Commonly, voice control systems have high privileges on the device, such as making a call or placing an order. However, they are vulnerable to voice attacks, which may lead to serious consequences. In this paper, we propose SegmentPerturb, which crafts hidden voice commands by querying the target models. The general idea of SegmentPerturb is that we separate the original command audio into multiple equal-length segments and apply maximum perturbation to each segment by probing the target speech recognition system. We show that our method is as efficient as, and in some aspects outperforms, other methods from previous works. We choose four popular speech recognition APIs and one mainstream smart home device to conduct the experiments. Results suggest that this algorithm can generate voice commands which can be recognized by the machine but are hard to understand by a human.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130944390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Identification in Online Social Networks using Graph Transformer Networks","authors":"K. N. P. Kumar, M. Gavrilova","doi":"10.1109/PST52912.2021.9647749","DOIUrl":"https://doi.org/10.1109/PST52912.2021.9647749","url":null,"abstract":"The problem of user recognition in online social networks is driven by the need for higher security. Previous recognition systems have extensively employed content-based features and temporal patterns to identify and represent distinctive characteristics within user profiles. This work reveals that semantic textual analysis and a graph representation of the user’s social network can be utilized to develop a user identification system. A graph transformer network architecture is proposed for the closed-set node identification task, leveraging the weighted social network graph as input. Users retweeting, mentioning, or replying to a target user’s tweet are considered neighbors in the social network graph and connected to the target user. The proposed user identification system outperforms all state-of-the-art systems. Moreover, we validate its performance on three publicly available datasets.","PeriodicalId":144610,"journal":{"name":"2021 18th International Conference on Privacy, Security and Trust (PST)","volume":"323 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123499466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}