Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security: Latest Publications

Session details: Session 3: Machine Learning for Security and Privacy
Sadia Afroz
DOI: 10.1145/3433220 | Published: 2020-11-13
Citations: 0
Session details: Session 2: Malware Detection
Ambra Demontis
DOI: 10.1145/3433219 | Published: 2020-11-13
Citations: 0
Disabling Backdoor and Identifying Poison Data by using Knowledge Distillation in Backdoor Attacks on Deep Neural Networks
Kota Yoshida, T. Fujino
DOI: 10.1145/3411508.3421375 | Published: 2020-11-09
Abstract: Backdoor attacks are poisoning attacks and a serious threat to deep neural networks. When an adversary mixes poison data into a training dataset, the result is a poison training dataset; a model trained on it becomes a backdoor model, which is both stealthy and effective for the attacker: it classifies poison images into the adversarial target class while classifying all other images correctly. We propose an additional procedure for our previously proposed knowledge-distillation countermeasure against backdoor attacks. Our procedure removes poison data from a poison training dataset and recovers the accuracy of the distillation model. Unlike previous countermeasures, ours does not require detecting or identifying backdoor models, backdoor neurons, or poison data. A characteristic assumption in our defense scenario is that the defender can collect clean images without labels. The defender distills clean knowledge from the backdoor model (teacher) into a distillation model (student) via knowledge distillation, then removes poison-data candidates from the poison training dataset by comparing the predictions of the backdoor and distillation models. Finally, the defender fine-tunes the distillation model on the detoxified training dataset to improve classification accuracy. We evaluated our countermeasure on two datasets: distillation disables the backdoor, and fine-tuning further improves the distillation model's classification accuracy. The fine-tuned model achieved accuracy comparable to a baseline model when the clean images available for distillation exceeded 13% of the training data. Our results indicate that our countermeasure applies to general image-classification tasks and works whether or not the training dataset the defender receives is poisoned.
Citations: 30
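The two steps described in this abstract can be illustrated with a minimal PyTorch sketch: distillation on unlabeled clean images, then poison filtering by teacher/student disagreement. The function names, loaders, temperature, and hyperparameters below are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def distill(teacher, student, clean_loader, epochs=10, temp=4.0, lr=1e-3):
    """Match the student's softened outputs to the teacher's on unlabeled
    clean images; since the clean data carries no trigger, the backdoor
    behaviour is not transferred to the student."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for x in clean_loader:                        # unlabeled clean images
            with torch.no_grad():
                t_out = F.softmax(teacher(x) / temp, dim=1)
            s_out = F.log_softmax(student(x) / temp, dim=1)
            loss = F.kl_div(s_out, t_out, reduction="batchmean") * temp ** 2
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student

def poison_candidates(teacher, student, train_loader):
    """Flag training samples on which teacher and student disagree;
    these are the poison-data candidates to drop before fine-tuning."""
    flags = []
    teacher.eval()
    student.eval()
    with torch.no_grad():
        for x, _ in train_loader:
            flags.append(teacher(x).argmax(1) != student(x).argmax(1))
    return torch.cat(flags)           # boolean mask over the training set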
E-ABS: Extending the Analysis-By-Synthesis Robust Classification Model to More Complex Image Domains
An Ju, D. Wagner
DOI: 10.1145/3411508.3421382 | Published: 2020-11-09
Abstract: Conditional generative models, such as Schott et al.'s Analysis-by-Synthesis (ABS), achieve state-of-the-art robustness on MNIST but fail on more challenging datasets. In this paper, we present E-ABS, an improvement on ABS that achieves state-of-the-art robustness on SVHN. E-ABS gives more reliable class-conditional likelihood estimates than ABS on both in-distribution and out-of-distribution samples. Theoretically, E-ABS preserves ABS's key features for robustness; thus, we show that E-ABS has certified robustness similar to ABS's. Empirically, E-ABS outperforms both ABS and adversarial training on SVHN and a traffic-sign dataset, achieving state-of-the-art robustness on these two real-world tasks. Our work shows a connection between ABS-like models and recent advances in generative models, suggesting that ABS-like models are a promising direction for defending against adversarial examples.
Citations: 8
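As background on how an analysis-by-synthesis classifier makes a decision, here is a schematic sketch assuming one trained decoder per class: classify an input as the class whose generative model can best reconstruct it, searching over the latent code by gradient descent. The decoders, latent size, and optimizer settings are assumptions for illustration, not E-ABS's actual architecture.

import torch

def abs_classify(x, decoders, latent_dim=8, steps=200, lr=0.1):
    """Return the class whose decoder best reconstructs x, optimizing
    a latent code per class (reconstruction error as a stand-in for
    negative class-conditional likelihood)."""
    best_err, best_cls = float("inf"), None
    for c, dec in enumerate(decoders):
        z = torch.zeros(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            err = ((dec(z) - x) ** 2).mean() + 1e-3 * (z ** 2).sum()
            opt.zero_grad()
            err.backward()
            opt.step()
        if err.item() < best_err:
            best_err, best_cls = err.item(), c
    return best_cls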
Risk-based Authentication Based on Network Latency Profiling
Esteban Rivera, Lizzy Tengana, Jesus Solano, Alejandra Castelblanco, Christian Lopez, Martín Ochoa
DOI: 10.1145/3411508.3421377 | Published: 2020-11-09
Abstract: Impersonation attacks against web authentication servers have grown in complexity over the last decade. Tunnelling services such as VPNs or proxies can, for instance, be used to faithfully impersonate victims from foreign countries. In this paper we study the detection of user-authentication attacks that use network tunnelling for geolocation deception. To that end, we explore different models that profile a user based on network latencies: we design a classical machine learning model and a deep learning model that profile web-resource loading times collected on the client side. To test our approach, we profiled network latencies for 86 real users located around the globe. We show that our proposed network profiling detects up to 88.3% of attacks using VPN tunnelling schemes.
Citations: 9
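To make the profiling idea concrete, here is a hedged scikit-learn sketch of the classical-model variant. The feature layout (per-resource load times), the random-forest choice, and the placeholder training data are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: one row per login attempt, one column per load time (ms)
# of a fixed set of web resources measured on the client side.
rng = np.random.default_rng(0)
X_train = rng.uniform(10, 300, size=(200, 10))
y_train = rng.integers(0, 2, size=200)   # 1 = genuine session, 0 = tunnelled

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def genuine_probability(latencies_ms):
    """Probability that the measured latencies match the genuine user's
    profile rather than a VPN/proxy tunnel."""
    return clf.predict_proba(np.asarray(latencies_ms).reshape(1, -1))[0, 1]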
eNNclave
Alexander Schlögl, Rainer Böhme
DOI: 10.1145/3411508.3421376 | Published: 2020-11-09
Abstract: Outsourcing machine learning inference creates a confidentiality dilemma: either the client must trust the server with potentially sensitive input data, or the server must share its commercially valuable model. Known remedies include homomorphic encryption, multi-party computation, and placing the entire model in a trusted enclave; none of these is suitable for large models. For two relevant use cases, we show that it is possible to keep all confidential model parameters in the last (dense) layers of deep neural networks. This allows us to split the model such that the confidential parts fit into a trusted enclave on the client side. We present the eNNclave toolchain, which cuts TensorFlow models at any layer, splitting them into public and enclaved layers. This preserves TensorFlow's performance optimizations and hardware support for the public layers while keeping the parameters of the enclaved layers private. Evaluations on several machine learning tasks spanning multiple domains show that fast inference is possible while keeping the sensitive model parameters confidential. Accuracy results are close to the baseline in which all layers carry sensitive information, confirming that our approach is practical.
Citations: 11
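The layer-splitting idea can be sketched in a few lines of TensorFlow. This toy example cuts a Sequential model into a public prefix and a confidential suffix; the split_model helper and cut point are hypothetical, not the actual eNNclave toolchain API, and the enclave is only simulated by a separate Python object here.

import tensorflow as tf

def split_model(model, cut):
    """Split a Sequential model: layers [0, cut) stay public,
    layers [cut, end) hold the confidential parameters."""
    public = tf.keras.Sequential(model.layers[:cut])
    private = tf.keras.Sequential(model.layers[cut:])
    return public, private

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # confidential
    tf.keras.layers.Dense(10, activation="softmax"),   # confidential
])
public, private = split_model(model, cut=2)

features = public(tf.random.normal([1, 32, 32, 3]))  # runs on normal hardware
logits = private(features)                           # would run in the enclave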
SCRAP
Jesus Solano, Christian Lopez, Esteban Rivera, Alejandra Castelblanco, Lizzy Tengana, Martín Ochoa
DOI: 10.1145/3411508.3421378 | Published: 2020-11-09
Abstract: Adversarial attacks have gained popularity recently due to their simplicity and impact. However, their applicability to diverse security scenarios is less well understood. In particular, in some scenarios attackers may naturally come up with ad-hoc black-box attack techniques inspired directly by characteristics of the problem space rather than using generic adversarial techniques. In this paper we explore an intuitive attack technique for mouse-based behavioral biometrics and compare its effectiveness against adversarial machine-learning attacks. We show that attacks leveraging domain knowledge have higher transferability when applied to various machine-learning techniques and are also more difficult to defend against. We also propose countermeasures against such attacks and discuss their effectiveness.
Citations: 7
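To illustrate what a problem-space, domain-knowledge attack on mouse biometrics might look like, in contrast to feature-space adversarial perturbations, here is a generic sketch that synthesizes a smooth, human-looking trajectory from a quadratic Bezier curve plus jitter. It is an assumption-laden illustration of the idea, not the paper's generator.

import numpy as np

def fake_mouse_path(start, end, n_points=50, jitter=2.0, seed=None):
    """Generate a smooth, slightly noisy mouse trajectory from start to end,
    mimicking the curvature and tremor of a human hand."""
    rng = np.random.default_rng(seed)
    start, end = np.asarray(start, float), np.asarray(end, float)
    ctrl = (start + end) / 2 + rng.normal(0, 40, size=2)   # random curvature
    t = np.linspace(0, 1, n_points)[:, None]
    path = (1 - t) ** 2 * start + 2 * (1 - t) * t * ctrl + t ** 2 * end
    return path + rng.normal(0, jitter, size=path.shape)   # hand tremor

trajectory = fake_mouse_path((10, 10), (800, 400), seed=0)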
Session details: Session 1: Adversarial Machine Learning
Nicholas Carlini
DOI: 10.1145/3433218 | Published: 2020-11-09
Citations: 0
Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis
Michael R. Smith, Nicholas T. Johnson, J. Ingram, A. Carbajal, Bridget I. Haus, Eva Domschot, Ramyaa, Christopher C. Lamb, Stephen J Verzi, W. Kegelmeyer
DOI: 10.1145/3411508.3421373 | Published: 2020-11-01
Abstract: Machine learning (ML) techniques are being used to detect increasing amounts of malware and malware variants. Despite ML's successes, we hypothesize that its full potential is not realized in malware analysis (MA) because of a semantic gap between the ML and MA communities, as demonstrated in the data that is used. Due in part to the available data, ML has focused primarily on detection, whereas MA is also interested in identifying behaviors. We review existing open-source malware datasets used in ML and find a lack of behavioral information that could facilitate stronger impact by ML in MA. As a first step toward bridging this gap, we label existing data with behavioral information using open-source MA reports: 1) altering the analysis from identifying malware to identifying behaviors, 2) aligning ML better with MA, and 3) allowing ML models to generalize to novel malware in a zero/few-shot learning manner. We classify the behavior of a malware family not seen during training using transfer learning from a state-of-the-art model for malware-family classification and achieve 57%-84% accuracy on behavioral identification, but fail to outperform the baseline set by a majority-class predictor. This highlights opportunities for improvement on this task related to the data representation, the need for malware-specific ML techniques, and a larger training set of malware samples labeled with behaviors.
Citations: 22
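The transfer-learning step described in the abstract follows a standard recipe: freeze a backbone pre-trained for family classification and train a fresh multi-label head for behaviors. The PyTorch sketch below uses a placeholder backbone, feature width, and label count; none of these reflect the authors' actual model.

import torch
import torch.nn as nn

backbone = nn.Sequential(             # stand-in for a pre-trained family classifier
    nn.Linear(2381, 512), nn.ReLU(),  # e.g. static-feature input (width assumed)
    nn.Linear(512, 128), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad = False           # freeze the pre-trained weights

n_behaviors = 12                      # assumed number of behavior labels
head = nn.Linear(128, n_behaviors)    # new multi-label behavior head
model = nn.Sequential(backbone, head)

criterion = nn.BCEWithLogitsLoss()    # one sigmoid per behavior label
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(8, 2381)              # placeholder feature batch
y = torch.randint(0, 2, (8, n_behaviors)).float()
loss = criterion(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()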
Where Does the Robustness Come from?: A Study of the Transformation-based Ensemble Defence
Chang Liao, Yao Cheng, Chengfang Fang, Jie Shi
DOI: 10.1145/3411508.3421380 | Published: 2020-09-28
Abstract: This paper provides a thorough study of the effectiveness of the transformation-based ensemble defence for image classification and the reasons behind it. Such defences have been shown empirically to enhance robustness against evasion attacks, but there is little analysis of why; in particular, it is unclear whether the robustness improvement comes from the transformations or from the ensemble. We design two adaptive attacks to better evaluate the transformation-based ensemble defence. Our experiments show that 1) adversarial examples transfer among models trained on data records after different reversible transformations; 2) the robustness gained through the transformation-based ensemble is limited; 3) this limited robustness comes mainly from the irreversible transformations rather than from ensembling a number of models; and 4) blindly increasing the number of sub-models in a transformation-based ensemble does not bring extra robustness gain.
Citations: 1
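The structure of the defence the paper stress-tests is simple to state in code: each sub-model is trained on its own transformed view of the input, and predictions are combined by majority vote. The sketch below assumes fitted scikit-learn-style models and callable transformations; both names are illustrative.

import numpy as np

def ensemble_predict(models, transforms, x):
    """models[i] was trained on transforms[i](X); combine their class
    predictions on transformed views of x by column-wise majority vote."""
    votes = np.stack([m.predict(t(x)) for m, t in zip(models, transforms)])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)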