{"title":"Deployment-quality and Accessible Solutions for Cryptography Code Development","authors":"Sazzadur Rahaman, Ya Xiao, Sharmin Afrose, K. Tian, Miles Frantz, Na Meng, B. Miller, Fahad Shaon, Murat Kantarcioglu, D. Yao","doi":"10.1145/3374664.3379536","DOIUrl":"https://doi.org/10.1145/3374664.3379536","url":null,"abstract":"Cryptographic API misuses seriously threatens software security. Automatic screening of cryptographic misuse vulnerabilities has been a popular and important line of research over the years. However, the vision of producing a scalable detection tool that developers can routinely use to screen millions of line of code has not been achieved yet. Our main technical goal is to attain a high precision and high throughput approach based on specialized program analysis. Specifically, we design inter-procedural program slicing on top of a new on-demand flow-, context- and field- sensitive data flow analysis. Our current prototype named CryptoGuard can detect a wide range of Java cryptographic API misuses with a precision of 98.61%, when evaluated on 46 complex Apache Software Foundation projects (including, Spark, Ranger, and Ofbiz). Our evaluation on 6,181 Android apps also generated many security insights. We created a comprehensive benchmark named CryptoApi-Bench with 40-unit basic cases and 131-unit advanced cases for in-depth comparison with leading solutions (e.g., SpotBugs, CrySL, Coverity). To make CryptoGuard widely accessible, we are in the process of integrating CryptoGuard with the Software Assurance Marketplace (SWAMP). SWAMP is a popular no-cost service for continuous software assurance and static code analysis.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123154600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ZeroLender","authors":"Yi Xie, Joshua Holmes, Gaby G. Dagher","doi":"10.1145/3374664.3375735","DOIUrl":"https://doi.org/10.1145/3374664.3375735","url":null,"abstract":"Since its inception a decade ago, Bitcoin and its underlying blockchain technology have been garnering interest from a large spectrum of financial institutions. Although it encompasses a currency, a payment method, and a ledger, Bitcoin as it currently stands does not support bitcoins lending. In this paper, we present a platform called ZeroLender for peer-to-peer lending in Bitcoin. Our protocol utilizes zero-knowledge proofs to achieve unlinkability between lenders and borrowers while securing payments in both directions against potential malicious behaviour of the ZeroLender as well as the lenders, and prove by simulation that our protocol is privacy-preserving. Based on our experiments, we show that the runtime and transcript size of our protocol scale linearly with respect to the number of lenders and repayments.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122475413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can AI be for Good in the Midst of Cyber Attacks and Privacy Violations?: A Position Paper","authors":"B. Thuraisingham","doi":"10.1145/3374664.3379334","DOIUrl":"https://doi.org/10.1145/3374664.3379334","url":null,"abstract":"Artificial Intelligence (AI) is affecting every aspect of our lives from healthcare to finance to driving to managing the home. Sophisticated machine learning techniques with a focus on deep learning are being applied successfully to detect cancer, to make the best choices for investments, to determine the most suitable routes for driving as well as to efficiently manage the electricity in our homes. We expect AI to have even more influence as advances are made with technology as well as in learning, planning, reasoning and explainable systems. While these advances will greatly advance humanity, organizations such as the United Nations have embarked on initiatives such as \"AI for Good\" and we can expect to see more emphasis on applying AI for the good of humanity especially in developing countries. However, the question that needs to be answered is Can AI be for Good when when the AI techniques can be attacked and the AI techniques themselves can cause privacy violations? This position paper will provide an overview of this topic with protecting children and children's rights as an example.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128723809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 5: Mobile Security","authors":"Phani Vadrevu","doi":"10.1145/3388502","DOIUrl":"https://doi.org/10.1145/3388502","url":null,"abstract":"","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"85 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123524169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Performance Study on Cryptographic Algorithms for IoT Devices","authors":"Eduardo Anaya, Jimil Patel, Prerak S. Shah, Vrushank Shah, Yuan Cheng","doi":"10.1145/3374664.3379531","DOIUrl":"https://doi.org/10.1145/3374664.3379531","url":null,"abstract":"Internet of Things (IoT) devices have grown in popularity over the past few years. These inter-connected devices collect and share data for automating industrial or household tasks. Despite its unprecedented growth, this paradigm currently faces many challenges that could hinder the deployment of such a system. These challenges include power, processing capabilities, and security, etc. Our project aims to explore these areas by studying an IoT network that secures data using common cryptographic algorithms, such as AES, ChaCha20, RSA, and Twofish. We measure computational time and power usage while running these cryptographic algorithms on IoT devices. Our findings show that while Twofish is the most power-efficient, Chacha20 is overall the most suitable one for IoT devices.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126393102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Privacy Awareness in Android App Descriptions Using Deep Learning","authors":"Johannes Feichtner, Stefan Gruber","doi":"10.1145/3374664.3375730","DOIUrl":"https://doi.org/10.1145/3374664.3375730","url":null,"abstract":"Permissions are a key factor in Android to protect users' privacy. As it is often not obvious why applications require certain permissions, developer-provided descriptions in Google Play and third-party markets should explain to users how sensitive data is processed. Reliably recognizing whether app descriptions cover permission usage is challenging due to the lack of enforced quality standards and a variety of ways developers can express privacy-related facts. We introduce a machine learning-based approach to identify critical discrepancies between developer-described app behavior and permission usage. By combining state-of-the-art techniques in natural language processing (NLP) and deep learning, we design a convolutional neural network (CNN) for text classification that captures the relevance of words and phrases in app descriptions in relation to the usage of dangerous permissions. Our system predicts the likelihood that an app requires certain permissions and can warn about descriptions in which the requested access to sensitive user data and system features is textually not represented. We evaluate our solution on 77,000 real-world app descriptions and find that we can identify individual groups of dangerous permissions with a precision between 71% and 93%. To highlight the impact of individual words and phrases, we employ a model explanation algorithm and demonstrate that our technique can successfully bridge the semantic gap between described app functionality and its access to security- and privacy-sensitive resources.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116017533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DeepTrust: An Automatic Framework to Detect Trustworthy Users in Opinion-based Systems","authors":"Edoardo Serra, Anu Shrestha, Francesca Spezzano, A. Squicciarini","doi":"10.1145/3374664.3375744","DOIUrl":"https://doi.org/10.1145/3374664.3375744","url":null,"abstract":"Opinion spamming has recently gained attention as more and more online platforms rely on users' opinions to help potential customers make informed decisions on products and services. Yet, while work on opinion spamming abounds, most efforts have focused on detecting an individual reviewer as spammer or fraudulent. We argue that this is no longer sufficient, as reviewers may contribute to an opinion-based system in various ways, and their input could range from highly informative to noisy or even malicious. In an effort to improve the detection of trustworthy individuals within opinion-based systems, in this paper, we develop a supervised approach to differentiate among different types of reviewers. Particularly, we model the problem of detecting trustworthy reviewers as a multi-class classification problem, wherein users may be fraudulent, unreliable or uninformative, or trustworthy. We note that expanding from the classic binary classification of trustworthy/untrustworthy (or malicious) reviewers is an interesting and challenging problem. Some untrustworthy reviewers may behave similarly to reliable reviewers, and yet be rooted by dark motives. On the contrary, other untrustworthy reviewers may not be malicious but rather lazy or unable to contribute to the common knowledge of the reviewed item. Our proposed method, DeepTrust, relies on a deep recurrent neural network that provides embeddings aggregating temporal information: we consider users' behavior over time, as they review multiple products. We model the interactions of reviewers and the products they review using a temporal bipartite graph and consider the context of each rating by including other reviewers' ratings of the same items. We carry out extensive experiments on a real-world dataset of Amazon reviewers, with known ground truth about spammers and fraudulent reviews. Our results show that DeepTrust can detect trustworthy, uninformative, and fraudulent users with an F1-measure of 0.93. Also, we drastically improve on detecting fraudulent reviewers (AUROC of 0.97 and average precision of 0.99 when combining DeepTrust with the F&G algorithm) as compared to REV2 state-of-the-art methods (AUROC of 0.79 and average precision of 0.48). Further, DeepTrust is robust to cold start users and overperforms all existing baselines.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129389574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Service-Oriented Modeling for Cyber Threat Analysis","authors":"Kees Leune, Sung Kim","doi":"10.1145/3374664.3379528","DOIUrl":"https://doi.org/10.1145/3374664.3379528","url":null,"abstract":"The future of enterprise cyber defense is predictive and the use of model-based threat hunting is an enabling technique. Current approaches to threat modeling are predicated on the assumption that models are used to develop better software, rather than to describe threats to software being used as a service (SaaS). In this paper, we propose a service-modeling methodology that will facilitate pro-active cyber defense for organizations adopting SaaS. We model structural and dynamic elements to provide a robust representation of the defensible system. Our approach is validated by implementing a prototype and by using it to model a popular course management system.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127291698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 3: Adversarial Machine Learning","authors":"A. Singhal","doi":"10.1145/3388499","DOIUrl":"https://doi.org/10.1145/3388499","url":null,"abstract":"","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing A Compelling Vision for Winning the Cybersecurity Arms Race","authors":"E. Bertino, A. Singhal, Srivathsan Srinivasagopalan, Rakesh M. Verma","doi":"10.1145/3374664.3379538","DOIUrl":"https://doi.org/10.1145/3374664.3379538","url":null,"abstract":"In cybersecurity there is a continuous arms race between the attackers and the defenders. In this panel, we investigate three key questions regarding this arms race. First question is whether this arms race is winnable. Second, if the answer to the first question is in the affirmative, what steps we need to take to win this race. Third, if the answer to the first question is negative, what is the justification for this and what steps can we take to improve the state of affairs and increase the bar for the attackers significantly.","PeriodicalId":171521,"journal":{"name":"Proceedings of the Tenth ACM Conference on Data and Application Security and Privacy","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126614297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}