{"title":"Network Anomaly Detection Using Transfer Learning Based on Auto-Encoders Loss Normalization","authors":"Aviv Yehezkel, Eyal Elyashiv, Or Soffer","doi":"10.1145/3474369.3486869","DOIUrl":"https://doi.org/10.1145/3474369.3486869","url":null,"abstract":"Anomaly detection is a classic, long-term research problem. Previous attempts to solve it have used auto-encoders to learn a representation of the normal behaviour of networks and detect anomalies according to reconstruction loss. In this paper, we study the problem of anomaly detection in computer networks and propose the concept of \"auto-encoder losses transfer learning\". This approach normalizes auto-encoder losses in different model deployments, providing the ability to transform loss vectors of different networks with potentially significant varying characteristics, properties, and behaviors into a domain invariant representation. This is forwarded to a global detection model that can detect and classify threats in a generalized way that is agnostic to the specific network deployment, allowing for comprehensive network coverage.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117152783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Labelless Drift Adaptation for Malware Detection","authors":"Zeliang Kan, Feargus Pendlebury, Fabio Pierazzi, L. Cavallaro","doi":"10.1145/3474369.3486873","DOIUrl":"https://doi.org/10.1145/3474369.3486873","url":null,"abstract":"The evolution of malware has long plagued machine learning-based detection systems, as malware authors develop innovative strategies to evade detection and chase profits. This induces concept drift as the test distribution diverges from the training, causing performance decay that requires constant monitoring and adaptation. In this work, we analyze the adaptation strategy used by DroidEvolver, a state-of-the-art learning system that self-updates using pseudo-labels to avoid the high overhead associated with obtaining a new ground truth. After removing sources of experimental bias present in the original evaluation, we identify a number of flaws in the generation and integration of these pseudo-labels, leading to a rapid onset of performance degradation as the model poisons itself. We propose DroidEvolver++, a more robust variant of DroidEvolver, to address these issues and highlight the role of pseudo-labels in addressing concept drift. We test the tolerance of the adaptation strategy versus different degrees of pseudo-label noise and propose the adoption of methods to ensure only high-quality pseudo-labels are used for updates. Ultimately, we conclude that the use of pseudo-labeling remains a promising solution to limitations on labeling capacity, but great care must be taken when designing update mechanisms to avoid negative feedback loops and self-poisoning which have catastrophic effects on performance.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114783237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SEAT","authors":"Zhanyuan Zhang, Yizheng Chen, David A. Wagner","doi":"10.1163/1574-9347_bnp_e1124640","DOIUrl":"https://doi.org/10.1163/1574-9347_bnp_e1124640","url":null,"abstract":"This graph shows how this district compares by its percentile with other U.S. congressional districts on three metrics: number of violations, number of violations per inspection, and number of violations per enforcement action. These metrics are used on the data from each of the three EPA programs– the Clean Water Act (CWA), the Clean Air Act (CAA) and the Resource Conservation and Recovery Act (RCRA). The data used is for the past five years, 2017 through 2021.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"187 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125841124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Detection of Side Channels in Cryptographic Protocols: DROWN the ROBOTs!","authors":"J. P. Drees, Pritha Gupta, E. Hüllermeier, Tibor Jager, Alexander Konze, Claudia Priesterjahn, Arunselvan Ramaswamy, Juraj Somorovsky","doi":"10.1145/3474369.3486868","DOIUrl":"https://doi.org/10.1145/3474369.3486868","url":null,"abstract":"Currently most practical attacks on cryptographic protocols like TLS are based on side channels, such as padding oracles. Some well-known recent examples are DROWN, ROBOT and Raccoon (USENIX Security 2016, 2018, 2021). Such attacks are usually found by careful and time-consuming manual analysis by specialists. In this paper, we consider the question of how such attacks can be systematically detected and prevented before (large-scale) deployment. We propose a new, fully automated approach, which uses supervised learning to identify arbitrary patterns in network protocol traffic. In contrast to classical scanners, which search for known side channels, the detection of general patterns might detect new side channels, even unexpected ones, such as those from the ROBOT attack. To analyze this approach, we develop a tool to detect Bleichenbacher-like padding oracles in TLS server implementations, based on an ensemble of machine learning algorithms. We verify that the approach indeed detects known vulnerabilities successfully and reliably. The tool also provides detailed information about detected patterns to developers, to assist in removing a potential padding oracle. Due to the automation, the approach scales much better than manual analysis and could even be integrated with a CI/CD pipeline of a development environment, for example.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131396402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SEAT: Similarity Encoder by Adversarial Training for Detecting Model Extraction Attack Queries","authors":"Zhanyuan Zhang, Yizheng Chen, David A. Wagner","doi":"10.1145/3474369.3486863","DOIUrl":"https://doi.org/10.1145/3474369.3486863","url":null,"abstract":"Given black-box access to the prediction API, model extraction attacks can steal the functionality of models deployed in the cloud. In this paper, we introduce the SEAT detector, which detects black-box model extraction attacks so that the defender can terminate malicious accounts. SEAT has a similarity encoder trained by adversarial training. Using the similarity encoder, SEAT detects accounts that make queries that indicate a model extraction attack in progress and cancels these accounts. We evaluate our defense against existing model extraction attacks and against new adaptive attacks introduced in this paper. Our results show that even against adaptive attackers, SEAT increases the cost of model extraction attacks by 3.8 times to 16 times.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116134883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Patch-based Defenses against Web Fingerprinting Attacks","authors":"Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao","doi":"10.1145/3474369.3486875","DOIUrl":"https://doi.org/10.1145/3474369.3486875","url":null,"abstract":"Anonymity systems like Tor are vulnerable to Website Fingerprinting (WF) attacks, where a local passive eavesdropper infers the victim's activity. WF attacks based on deep learning classifiers have successfully overcome numerous defenses. While recent defenses leveraging adversarial examples offer promise, these adversarial examples can only be computed after the network session has concluded, thus offering users little protection in practical settings. We propose Dolos, a system that modifies user network traffic in real time to successfully evade WF attacks. Dolos injects dummy packets into traffic traces by computing input-agnostic adversarial patches that disrupt the deep learning classifiers used in WF attacks. Patches are then applied to alter and protect user traffic in real time. Importantly, these patches are parameterized by a user-side secret, ensuring that attackers cannot use adversarial training to defeat Dolos. We experimentally demonstrate that Dolos provides >94% protection against state-of-the-art WF attacks under a variety of settings, including adaptive countermeasures. Dolos outperforms prior defenses both in terms of higher protection performance as well as lower bandwidth overhead. Finally, we show that Dolos is provably robust to any attack under specific, but realistic, assumptions on the setting in which the defense is deployed.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131863707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 2B: Machine Learning for Cybersecurity","authors":"Ambra Demontis","doi":"10.1145/3494695","DOIUrl":"https://doi.org/10.1145/3494695","url":null,"abstract":"","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115151885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spying through Virtual Backgrounds of Video Calls","authors":"Jan Malte Hilgefort, Dan Arp, Konrad Rieck","doi":"10.1145/3474369.3486870","DOIUrl":"https://doi.org/10.1145/3474369.3486870","url":null,"abstract":"Video calls have become an essential part of today's business life, especially due to the Corona pandemic. Several industry branches enable their employees to work from home and collaborate via video conferencing services. While remote work offers benefits for health safety and personal mobility, it also poses privacy risks. Visual content is directly transmitted from the private living environment of employees to third parties, potentially exposing sensitive information. To counter this threat, video conferencing services support replacing the visible environment of a video call with a virtual background. This replacement, however, is imperfect, leaking tiny regions of the real background in video frames. In this paper, we explore how these leaks in virtual backgrounds can be exploited to reconstruct regions of the real environment. To this end, we build on recent techniques of computer vision and derive an approach capable of extracting and aggregating leaked pixels in a video call. In an empirical study with the services Zoom, Webex, and Google Meet, we can demonstrate that the exposed fragments of the reconstructed background are sufficient to spot different objects. From 114 video calls with virtual backgrounds, 35% enable to correctly identify objects in the environment. We conclude that virtual backgrounds provide only limited protection, and alternative defenses are needed.","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126063796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 3: Privacy-Preserving Machine Learning","authors":"Yizheng Chen","doi":"10.1145/3494696","DOIUrl":"https://doi.org/10.1145/3494696","url":null,"abstract":"","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"319 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116290758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 1: Adversarial Machine Learning","authors":"Nicholas Carlini","doi":"10.1145/3494693","DOIUrl":"https://doi.org/10.1145/3494693","url":null,"abstract":"","PeriodicalId":411057,"journal":{"name":"Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121631695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}