{"title":"Attributing and Detecting Fake Images Generated by Known GANs","authors":"Matthew Joslin, S. Hao","doi":"10.1109/SPW50608.2020.00019","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00019","url":null,"abstract":"The quality of GAN-generated fake images has improved significantly, and recent GAN approaches, such as StyleGAN, achieve near indistinguishability from real images for the naked eye. As a result, adversaries are attracted to using GAN-generated fake images for disinformation campaigns and fraud on social networks. However, training an image generation network to produce realistic-looking samples remains a time-consuming and difficult problem, so adversaries are more likely to use published GAN models to generate fake images. In this paper, we analyze the frequency domain to attribute and detect fake images generated by a known GAN model. We derive a similarity metric on the frequency domain and develop a new approach for GAN image attribution. We conduct experiments on four trained GAN models and two real image datasets. Our results show high attribution accuracy against real images and those from other GAN models. We further analyze our method under evasion attempts and find the frequency-based approach is comparatively robust. In this paper, we analyze the frequency domain to attribute and detect fake images generated by a known GAN model. We derive a similarity metric on the frequency domain and develop a new approach for GAN image attribution. We conduct experiments on four trained GAN models and two real image datasets. Our results show high attribution accuracy against real images and those from other GAN models. We further analyze our method under evasion attempts and find the frequency-based approach is comparatively robust.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116210120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework for the Analysis of Deep Neural Networks in Autonomous Aerospace Applications using Bayesian Statistics","authors":"Yuning He, J. Schumann","doi":"10.1109/SPW50608.2020.00054","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00054","url":null,"abstract":"Deep Neural Networks (DNNs) are considered to be key components in many autonomous systems. Applications range from vision-based obstacle avoidance to intelligent/learning control and planning. Safety-critical applications as found in the aerospace domain require that the behavior of the DNN is validated and tested rigorously for safety of the autonomous system (AUS). In this paper, we present a framework to support testing of DNNs and the analysis of the network structure. Our framework employs techniques from statistical modeling and active learning to effectively generate test cases for DNN safety testing and performance analysis. We will present results of a case study on a physics-based Deep recurrent residual neural network (DR-RNN), which has been trained to emulate the aerodynamics behavior of a fixed-wing aircraft.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"417 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117320678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning from Context: A Multi-View Deep Learning Architecture for Malware Detection","authors":"Adarsh Kyadige, Ethan M. Rudd, Konstantin Berlin","doi":"10.1109/SPW50608.2020.00018","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00018","url":null,"abstract":"Machine learning (ML) classifiers used for malware detection typically employ numerical representations of the content of each file when making malicious/benign determinations. However, there is also relevant information that can be gleaned from the context in which the file was seen which is often ignored. One source of contextual information is the file's location on disk. For example, a malicious file masquerading as a known benign file (e.g., a Windows system DLL) is more likely to appear suspicious if the detector can intelligibly utilize information about the path at which it resides. Knowledge of the file path information could also make it easier to detect files which try to evade disk scans by placing themselves in specific locations. File paths are also available with little overhead and can seamlessly be integrated into a multi-view static ML detector, potentially yielding higher detection rates at very high throughput and minimal infrastructural changes. In this work, we propose a multi-view deep neural network architecture, which takes feature vectors from the PE file content as well as corresponding file paths as inputs and outputs a detection score. We perform an evaluation on a commercial-scale dataset of approximately 10 million samples - files and file paths from user endpoints serviced by an actual security vendor. We then conduct an interpretability analysis via LIME modeling to ensure that our classifier has learned a sensible representation and examine how the file path contributes to change in the classifier's score in different cases. We find that our model learns useful aspects of the file path for classification, resulting in a 26.6% improvement in the true positive rate at a 0.001 false positive rate (FPR) and a 64.6% improvement at 0.0001 FPR, compared to a model that operates on PE file content only.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115745323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Smart City Internet for Autonomous Systems","authors":"Gregory Falco","doi":"10.1109/SPW50608.2020.00051","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00051","url":null,"abstract":"A smart city involves critical infrastructure systems that have been digitally enabled. Increasingly, many smart city cyber-physical systems are becoming automated. The extent of automation ranges from basic logic gates to sophisticated, artificial intelligence (AI) that enables fully autonomous systems. Because of modern society's reliance on autonomous systems in smart cities, it is crucial for them to operate in a safe manner; otherwise, it is feasible for these systems to cause considerable physical harm or even death. Because smart cities could involve thousands of autonomous systems operating in concert in densely populated areas, safety assurances are required. Challenges abound to consistently manage the safety of such autonomous systems due to their disparate developers, manufacturers, operators and users. A novel network and a sample of associated network functions for autonomous systems is proposed that aims to provide a baseline of safety for autonomous systems in a smart city ecosystem. A proposed network called the Assured Autonomous Cyber-Physical Ecosystem (AACE) would be separate from the Internet, and enforces certain functions that enable safety through active networking. Each smart city could dictate the functions for their own AACE, providing a means for enforcing safety policies across disparate autonomous systems operating in the city's jurisdiction. Such a network design sits at the margins of the end-to-end principle, which is warranted considering the safety of autonomous systems is at stake as is argued in this paper. Without a scalable safety strategy for autonomous systems as proposed, assured autonomy in smart cities will remain elusive.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125237787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Armor Within: Defending Against Vulnerabilities in Third-Party Libraries","authors":"Sameed Ali, Prashant Anantharaman, Sean W. Smith","doi":"10.1109/SPW50608.2020.00063","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00063","url":null,"abstract":"Vulnerabilities in third-party software modules have resulted in severe security flaws, including remote code execution and denial of service. However, current approaches to securing such libraries suffer from one of two problems. First, they do not perform sufficiently well to be applicable in practice and incur high CPU and memory overheads. Second, they are also harder to apply to legacy and proprietary systems when the source code of the application is not available. There is, therefore, a dire need to secure the internal boundaries within an application to ensure vulnerable software modules are not exploitable via crafted input attacks. We present a novel approach to secure third-party software modules without requiring access to the source code of the program. First, using the foundations of language-theoretic security, we build a validation filter for the vulnerable module. Using the foundations of linking and loading, we present two different ways to insert that filter between the main code and the vulnerable module. Finally, using the foundations of ELF-based access control, we ensure any entry into the vulnerable module must first go through the filter. We evaluate our approaches using three known real-world exploits in two popular libraries-libpng and libxml. We were able to successfully prevent all three exploits from executing.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114448068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security Analysis of Networked 3D Printers","authors":"Matthew McCormack, Sanjay Chandrasekaran, Guyue Liu, Tian-jiao Yu, Sandra DeVincent Wolf, V. Sekar","doi":"10.1109/SPW50608.2020.00035","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00035","url":null,"abstract":"Networked 3D printers are an emerging trend in manufacturing. However, many have poor security controls, allowing attackers to cause physical hazards, create defective safety-critical parts, steal proprietary data, and halt costly operations. Prior work has given limited attention to identifying if a network attacker is able to achieve these goals. In this work, we present C3PO, an open-source network security analysis tool that systematically identifies security threats to networked 3D printers. C3PO's design is guided by industry standards and best practices, identifying potential vulnerabilities in data transfer, the printing application, availability, and exposed network services. Furthermore, C3PO analyzes how a network deployment impacts a 3D printer's security, such as an attacker compromising an IoT camera in order to send malicious commands to a networked 3D printer. We use C3PO to analyze 13 networked 3D printers and 5 real-world manufacturing network deployments. We identified 8 types of network security vulnerabilities such as a susceptibility to low-rate denial of service attacks, the transmission of unencrypted data, and publicly accessible network deployments.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115044747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy-preserving Continuous Tumour Relapse Monitoring Using In-body Radio Signals","authors":"Sam Hylamia, Wenqing Yan, André Teixeira, N. B. Asan, M. Pérez, R. Augustine, T. Voigt","doi":"10.1109/SPW50608.2020.00030","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00030","url":null,"abstract":"Early detection and treatment of cancerous tumours significantly improve the lives of cancer patients, as well as increase their chance of surviving and reduce treatment cost. A novel study has utilised the human adipose (fat) tissue as a propagation channel for radio frequency communication within the human body. A notable application of this technology is the continuous monitoring of the growth of perturbants, such as tumours, in the channel. This paper addresses the privacy issues associated with the deployment of this monitoring technology. Our work departs from previous studies in that we consider the privacy of the sensing process itself, rather than the privacy of sensed data. We study the information leakage associated with the deployment of this technology and propose and evaluate a set of privacy-enhancing techniques that reduces information leakage. Finally, we propose and evaluate an approach that combines these techniques and, thereby, protects patient's privacy.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131239398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clipped BagNet: Defending Against Sticker Attacks with Clipped Bag-of-features","authors":"Zhanyuan Zhang, Benson Yuan, Michael McCoyd, David A. Wagner","doi":"10.1109/SPW50608.2020.00026","DOIUrl":"https://doi.org/10.1109/SPW50608.2020.00026","url":null,"abstract":"Many works have demonstrated that neural networks are vulnerable to adversarial examples. We examine the adversarial sticker attack, where the attacker places a sticker somewhere on an image to induce it to be misclassified. We take a first step towards defending against such attacks using clipped BagNet, which bounds the influence that any limited-size sticker can have on the final classification. We evaluate our scheme on ImageNet and show that it provides strong security against targeted PGD attacks and gradient-free attacks, and yields certified security for a 95% of images against a targeted 20 × 20 pixel attack.","PeriodicalId":413600,"journal":{"name":"2020 IEEE Security and Privacy Workshops (SPW)","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122060778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}