Real-Time Evasion Attacks against Deep Learning-Based Anomaly Detection from Distributed System Logs

J. D. Herath, Ping Yang, Guanhua Yan
DOI: 10.1145/3422337.3447833
Published in: Proceedings of the Eleventh ACM Conference on Data and Application Security and Privacy (2021-04-26)
Citations: 2

Abstract

Distributed system logs, which record states and events that occurred during the execution of a distributed system, provide valuable information for troubleshooting and diagnosis of its operational issues. Due to the complexity of such systems, there have been some recent research efforts on automating anomaly detection from distributed system logs using deep learning models. As these anomaly detection models can also be used to detect malicious activities inside distributed systems, it is important to understand their robustness against evasive manipulations in adversarial environments. Although there are various attacks against deep learning models in domains such as natural language processing and image classification, they cannot be applied directly to evade anomaly detection from distributed system logs. In this work, we explore the adversarial robustness of deep learning-based anomaly detection models on distributed system logs. We propose a real-time attack method called LAM (Log Anomaly Mask) to perturb streaming logs with minimal modifications in an online fashion so that the attacks can evade anomaly detection by even the state-of-the-art deep learning models. To overcome the search space complexity challenge, LAM models the perturber as a reinforcement learning agent that operates in a partially observable environment to predict the best perturbation action. We have evaluated the effectiveness of LAM on two log-based anomaly detection systems for distributed systems: DeepLog and an AutoEncoder-based anomaly detection system. Our experimental results show that LAM significantly reduces the true positive rate of these two models while achieving attack imperceptibility and real-time responsiveness.
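The abstract describes LAM's core loop: an online perturber observes only a window of the already-emitted log stream (a partially observable environment) and, for each incoming log key, picks a perturbation action that keeps the detector's anomaly score low while making minimal modifications. The following is a minimal sketch of that loop, not the paper's implementation: the action set (keep / drop / replace), the window size, the `anomaly_score` stand-in for the detector, and the greedy `choose_action` stand-in for the learned RL policy are all illustrative assumptions.

```python
# Toy sketch of an online log-stream perturber in the spirit of LAM.
# Hypothetical throughout: NORMAL_KEYS, WINDOW, anomaly_score, and the
# greedy policy all stand in for components learned or defined in the paper.

KEEP, DROP, REPLACE = "keep", "drop", "replace"
NORMAL_KEYS = {1, 2, 3}   # hypothetical set of "normal" log keys
WINDOW = 4                # partial observation: last 4 emitted keys

def anomaly_score(window):
    """Stand-in for the detector: fraction of keys outside the normal set."""
    if not window:
        return 0.0
    return sum(k not in NORMAL_KEYS for k in window) / len(window)

def choose_action(window, key):
    """Stand-in for the RL policy: greedily pick the action whose resulting
    visible window has the lowest anomaly score (KEEP wins ties, which
    loosely models the 'minimal modifications' objective)."""
    candidates = {
        KEEP: window + [key],
        DROP: window,
        REPLACE: window + [min(NORMAL_KEYS)],
    }
    return min(candidates, key=lambda a: anomaly_score(candidates[a]))

def perturb_stream(stream):
    """Process log keys one at a time, emitting the perturbed stream."""
    out = []
    for key in stream:
        action = choose_action(out[-WINDOW:], key)
        if action == KEEP:
            out.append(key)
        elif action == REPLACE:
            out.append(min(NORMAL_KEYS))
        # DROP emits nothing
    return out

perturbed = perturb_stream([1, 2, 9, 3, 9, 9, 1])  # 9s are anomalous keys
```

In the paper this greedy heuristic is replaced by a trained reinforcement-learning agent, which is what lets LAM handle the search-space complexity and react in real time; the sketch only illustrates the streaming, window-limited decision structure.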