Deep Models with Differential Privacy for Distributed Web Attack Detection

A. Tran, T. Luong, Xuan Sang Pham, Thi-Luong Tran
{"title":"Deep Models with Differential Privacy for Distributed Web Attack Detection","authors":"A. Tran, T. Luong, Xuan Sang Pham, Thi-Luong Tran","doi":"10.1109/KSE56063.2022.9953788","DOIUrl":null,"url":null,"abstract":"The complexity of today’s web applications entails many security risks, mainly targeted attacks on zero-day vulnerabilities. New attack types often disable the detection capabilities of intrusion detection systems (IDS) and web application firewalls (WAFs) based on traditional pattern matching rules. Therefore, the need for new generation WAF systems using machine learning and deep learning technologies is urgent today. Deep learning models require an enormous amount of input data to be able to train the models accurately, leading to the very resource-intensive problem of collecting and labeling data. In addition, web request data is often sensitive or private and should not be disclosed, imposing a challenge to develop high-accuracy deep learning and machine learning models. This paper proposes a privacy-preserving distributed training process for the web attack detection deep learning model. The proposed model allows the participants to share the training process to improve the accuracy of the deep model for web attack detection while preserving the privacy of the local data and local model parameters. The proposed model uses the technique of adding noise to the shared parameter to ensure differential privacy. The participants will train the local detection model and share intermediate training parameters with some noise that increases the privacy of the training process. The results evaluated on the CSIC 2010 benchmark dataset show that the detection accuracy is more than 98%, which is close to the model that does not guarantee privacy and is much higher than the maximum accuracy of all non-data-sharing local models.","PeriodicalId":330865,"journal":{"name":"2022 14th International Conference on Knowledge and Systems Engineering (KSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 14th International Conference on Knowledge and Systems Engineering (KSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/KSE56063.2022.9953788","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The complexity of today’s web applications entails many security risks, chiefly targeted attacks on zero-day vulnerabilities. New attack types often evade intrusion detection systems (IDS) and web application firewalls (WAFs) that rely on traditional pattern-matching rules, so there is an urgent need for a new generation of WAF systems built on machine learning and deep learning. Deep learning models, however, require an enormous amount of input data to train accurately, making data collection and labeling very resource-intensive. In addition, web request data is often sensitive or private and should not be disclosed, which poses a challenge for developing high-accuracy deep learning and machine learning models. This paper proposes a privacy-preserving distributed training process for a deep learning web attack detection model. The proposed approach lets participants share the training process to improve the accuracy of the deep model for web attack detection while preserving the privacy of their local data and local model parameters. Differential privacy is ensured by adding noise to the shared parameters: each participant trains a local detection model and shares intermediate training parameters with added noise, which strengthens the privacy of the training process. Evaluation on the CSIC 2010 benchmark dataset shows a detection accuracy above 98%, close to that of the model without privacy guarantees and much higher than the best accuracy achieved by any local model trained without data sharing.
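To make the noise-adding step concrete, the Python sketch below shows one common way such a scheme can be realized: each participant clips its local parameter update and perturbs it with Gaussian noise before sharing, and a coordinator averages the noised updates. The clipping bound, noise multiplier, and simple averaging are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch of differentially private parameter sharing in distributed
# training. The clip_norm and noise_multiplier values below are illustrative
# assumptions, not settings reported by the authors.
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float,
                     noise_multiplier: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip a local parameter update and add Gaussian noise before sharing."""
    # Bound each participant's influence by clipping the update's L2 norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian noise calibrated to the clipping bound (Gaussian mechanism).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Coordinator-side averaging of the noised updates from all participants."""
    return np.mean(updates, axis=0)

# Toy round: three participants share noised updates of a 4-parameter model.
rng = np.random.default_rng(0)
local_updates = [rng.normal(size=4) for _ in range(3)]
shared = [privatize_update(u, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
          for u in local_updates]
print(aggregate(shared))
```

In this kind of setup, only the noised updates leave each participant, so raw web request data and exact local model parameters are never exposed; the noise level trades off privacy against the small accuracy loss the abstract reports relative to the non-private model.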