GRAMAC
Devyani Vij, Vivek Balachandran, Tony Thomas, Roopak Surendran
DOI: 10.1145/3374664.3379530
Abstract: Android malware analysis has been an active area of research as the number and variety of Android malware have increased dramatically. Most previous work has used permission-based models, behavioral analysis, or code analysis to identify the family of a malware sample. Code analysis is weak against obfuscation and does not involve real-time execution of the application. Behavioral analysis captures runtime behavior but also struggles with obfuscated applications. Permission-based models use only the manifest file to analyse malware. In this paper, we propose a novel graph-signature-based malware classification mechanism. The proposed graph signature uses sensitive API calls to capture the flow of control, which helps establish a caller-callee relationship between the sensitive APIs and the nodes incident on them. A dataset of graph signatures of widely known malware families is then created. A new application's graph signature is compared with the graph signatures in the dataset, and the application is classified into the respective malware family or declared as goodware/unknown. Experiments with 15 malware families from the AMD dataset and a total of 400 applications gave an average accuracy of 0.97 with an error rate of 0.03.
{"title":"FridgeLock","authors":"Fabian Franzen, Manuel Andreas, Manuel Huber","doi":"10.1145/3374664.3375747","DOIUrl":"https://doi.org/10.1145/3374664.3375747","url":null,"abstract":"To secure mobile devices, such as laptops and smartphones, against unauthorized physical data access, employing Full Disk Encryption (FDE) is a popular defense. This technique is effective if the device is always shut down when unattended. However, devices are often suspended instead of switched off. This leaves confidential data such as the FDE key, passphrases and user data in RAM which may be read out using cold boot, JTAG or DMA attacks. These attacks can be mitigated by encrypting the main memory during suspend. While this approach seems promising, it is not implemented on Windows or Linux. We present FridgeLock to add memory encryption on suspend to Linux. Our implementation as a Linux Kernel Module (LKM) does not require an admin to recompile the kernel. Using Dynamic Kernel Module Support (DKMS) allows for easy and fast deployment on existing Linux systems, where the distribution provides a prepackaged kernel and kernel updates. We tested our module on a range of 4.19 to 5.3 kernels and experienced a low performance impact, sustaining the system's usability. We hope that our tool leads to a more detailed evaluation of memory encryption in real world usage scenarios.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130123434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CREHMA","authors":"Hoai Viet Nguyen, L. Lo Iacono","doi":"10.1145/3374664.3375750","DOIUrl":"https://doi.org/10.1145/3374664.3375750","url":null,"abstract":"Scalability and security are two important elements of contemporary distributed software systems. The Web vividly shows that while complying with the constraints defined by the architectural style REST, the layered design of software with intermediate systems enables to scale at large. Intermediaries such as caches, however, interfere with the security guarantees of the industry standard for protecting data in transit on the Web, TLS, as in these circumstances the TLS channel already terminates at the intermediate system's server. For more in-depth defense strategies, service providers require message-oriented security means in addition to TLS. These are hardly available and only in the form of HTTP signature schemes that do not take caches into account either. In this paper we introduce CREHMA, a REST-ful HTTP message signature scheme that guarantees the integrity and authenticity of Web assets from end-to-end while simultaneous allowing service providers to enjoy the benefits of Web caches. Decisively, CREHMA achieves these guarantees without having to trust on the integrity of the cache and without requiring making changes to existing Web caching systems. In extensive experiments we evaluated CREHMA and found that it only introduces marginal impacts on metrics such as latency and data expansion while providing integrity protection from end to end. CREHMA thus extends the possibilities of service providers to achieve an appropriate balance between scalability and security.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124875990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PESC
Jiadong Sun, Xia Zhou, Wenbo Shen, Yajin Zhou, K. Ren
DOI: 10.1145/3374664.3375734
Abstract: The stack canary is the most widely deployed defense against stack buffer overflow attacks. However, since its proposal, the design of the stack canary has seen very few improvements over the past 20 years, making it vulnerable to new and sophisticated attacks. For example, the ARM64 Linux kernel still adopts the same design as StackGuard, using one global canary for the whole kernel. The x86_64 Linux kernel uses a better design with a per-task canary for different threads. Unfortunately, both are vulnerable to kernel memory leaks. Using memory leak bugs or hardware side-channel attacks such as Meltdown or Spectre, attackers can easily read the kernel stack canary value and thus bypass the protection. To address this issue, we propose a fine-grained kernel stack canary design named PESC, standing for Per-System-Call Canary, which changes the kernel canary value on a per-system-call basis. With PESC, attackers cannot accumulate any knowledge of prior canaries across multiple system calls; in other words, PESC is resilient to memory leaks. Our key observation is that before serving a system call, the kernel stack is empty and there are no residual canary values on the stack. As a result, we can directly change the canary value on system call entry without the burden of tracking and updating old canary values on the kernel stack. Moreover, to balance performance and security, we propose two PESC designs: one relies on the performance monitor counter register, termed PESC-PMC, while the other uses the kernel random number generator, denoted PESC-RNG. We implemented both PESC-PMC and PESC-RNG on real-world hardware, using a HiKey960 board for ARM64 and an Intel i7-7700 for x86_64. Synthetic benchmark and SPEC CPU2006 results show that the whole-system performance overhead introduced by PESC-PMC and PESC-RNG is less than 1%.
DANdroid
Stuart Millar, Niall McLaughlin, Jesús Martínez del Rincón, Paul Miller, Ziming Zhao
DOI: 10.1145/3374664.3375746
Abstract: We present DANdroid, a novel Android malware detection model using a deep-learning Discriminative Adversarial Network (DAN) that classifies both obfuscated and unobfuscated apps as either malicious or benign. Our method, which we empirically demonstrate is robust against a selection of four prevalent, real-world obfuscation techniques, makes three contributions. Firstly, an innovative application of discriminative adversarial learning results in malware feature representations with a strong degree of resilience to the four obfuscation techniques. Secondly, three feature sets (raw opcodes, permissions, and API calls) are combined in a multi-view deep learning architecture to increase this obfuscation resilience. Thirdly, we demonstrate the potential of our model to generalize over rare and future obfuscation methods not seen in training. With an overall dataset of 68,880 obfuscated and unobfuscated malicious and benign samples, our multi-view DAN model achieves an average F-score of 0.973, which compares favourably with the state of the art despite exposure to the selected obfuscation methods applied both individually and in combination.
{"title":"A Baseline for Attribute Disclosure Risk in Synthetic Data","authors":"Markus Hittmeir, Rudolf Mayer, Andreas Ekelhart","doi":"10.1145/3374664.3375722","DOIUrl":"https://doi.org/10.1145/3374664.3375722","url":null,"abstract":"The generation of synthetic data is widely considered as viable method for alleviating privacy concerns and for reducing identification and attribute disclosure risk in micro-data. The records in a synthetic dataset are artificially created and thus do not directly relate to individuals in the original data in terms of a 1-to-1 correspondence. As a result, inferences about said individuals appear to be infeasible and, simultaneously, the utility of the data may be kept at a high level. In this paper, we challenge this belief by interpreting the standard attacker model for attribute disclosure as classification problem. We show how disclosure risk measures presented in recent publications may be compared to or even be reformulated as machine learning classification models. Our overall goal is to empirically analyze attribute disclosure risk in synthetic data and to discuss its close relationship to data utility. Moreover, we improve the baseline for attribute disclosure risk from the attacker's perspective by applying variants of the RadiusNearestNeighbor and the EnsembleVote classifier.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129921077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the Impact of Word Representation in Hate Speech and Offensive Language Detection and Explanation
Ruijia Hu, Wyatt Dorris, Nishant Vishwamitra, Feng Luo, Matthew Costello
DOI: 10.1145/3374664.3379535
Abstract: Online hate speech and offensive language are widely recognized as critical social problems. To defend against them, several recent works have focused on the detection and explanation of hate speech and offensive language using machine learning approaches. Although these approaches are quite effective at detecting and explaining hate speech and offensive language samples, they do not explore the impact of how such samples are represented. In this work, we introduce a novel, pronunciation-based representation of hate speech and offensive language samples that enables their detection with high accuracy. To demonstrate its effectiveness, we extend an existing hate speech and offensive language defense model based on deep Long Short-Term Memory (LSTM) neural networks by training it on our pronunciation-based representation. We find that the pronunciation-based representation significantly reduces noise in the datasets and enhances the overall performance of the existing model.
{"title":"Session details: Session 4: Privacy I","authors":"M. Fernández","doi":"10.1145/3388500","DOIUrl":"https://doi.org/10.1145/3388500","url":null,"abstract":"","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132279553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defensive Charging: Mitigating Power Side-Channel Attacks on Charging Smartphones
Richard Matovu, Abdul Serwadda, A. Bilbao, Isaac Griswold-Steiner
DOI: 10.1145/3374664.3375732
Abstract: Mobile devices are increasingly relied upon in users' daily lives. This dependence supports a growing network of mobile device charging hubs in public spaces such as airports. Unfortunately, the public nature of these hubs makes them vulnerable to tampering. By embedding illicit power meters in charging stations, an attacker can launch power side-channel attacks aimed at inferring user activity on smartphones (e.g., web browsing or typing patterns). In this paper, we present three power side-channel attacks that can be launched by an adversary during the phone-charging process. Such attacks use machine learning to identify unique patterns hidden in the measured current draw and infer information about a user's activity. To defend against these attacks, we design and rigorously evaluate two defense mechanisms, a hardware-based and a software-based solution. The defenses randomly perturb the current drawn during charging, thereby masking the unique patterns of the user's activities. Our experiments show that the two defenses force each of the attacks to perform no better than random guessing. In practice, the user would only need to choose one of the defensive mechanisms to protect against intrusions involving power-draw analysis.
{"title":"Admin-CBAC: An Administration Model for Category-Based Access Control","authors":"Clara Bertolissi, M. Fernández, B. Thuraisingham","doi":"10.1145/3374664.3375725","DOIUrl":"https://doi.org/10.1145/3374664.3375725","url":null,"abstract":"We present Admin-CBAC, an administrative model for Category- Based Access Control (CBAC). Since most of the access control models in use nowadays are instances of CBAC, in particular the popular RBAC and ABAC models, from Admin-CBAC we derive administrative models for RBAC and ABAC too. We define Admin- CBAC using Barker's metamodel, and use its axiomatic semantics to derive properties of administrative policies. Using an abstract operational semantics for administrative actions, we show how properties (such as safety, liveness and effectiveness of policies) and constraints (such as separation of duties) can be checked, and discuss the impact of policy changes. Although the most interesting properties of policies are generally undecidable in dynamic access control models, we identify particular cases where reachability based properties are decidable and can be checked using our operational semantics, generalising previous results for RBAC and ABACalpha.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129883085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}