Ensemble adversarial training based defense against adversarial attacks for machine learning-based intrusion detection system

IF 0.7 · CAS Tier 4 (Computer Science) · JCR Q4 · Computer Science, Artificial Intelligence
Muhammad Shahzad Haroon, Husnain Mansoor Ali
Journal: Neural Network World · DOI: 10.14311/nnw.2023.33.018 · Published: 2023-01-01 · Open access: no · Citations: 0

Abstract

In this paper, a defence mechanism is proposed against adversarial attacks. The defence is based on an ensemble classifier that is adversarially trained. This is accomplished by generating adversarial attacks with four different attack methods, i.e., the Jacobian-based saliency map attack (JSMA), projected gradient descent (PGD), the momentum iterative method (MIM), and the fast gradient sign method (FGSM). The adversarial examples are used to identify the robust machine-learning algorithms that eventually participate in the ensemble. The adversarial attacks are divided into seen and unseen attacks. To validate our work, experiments are conducted using the NSL-KDD, UNSW-NB15 and CICIDS17 datasets. Grid search for the ensemble is used to optimise results. The metrics used for performance evaluation are accuracy, F1 score and AUC score. It is shown that an adversarially trained ensemble classifier produces better results.
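The core ideas in the abstract (crafting gradient-sign adversarial examples and then retraining on them) can be illustrated with a minimal sketch. This is not the paper's pipeline: the synthetic Gaussian data, the logistic-regression stand-in for an IDS model, and all function names below are assumptions for illustration; the paper uses full attack implementations (JSMA, PGD, MIM, FGSM) against classifiers trained on NSL-KDD, UNSW-NB15 and CICIDS17.

```python
import numpy as np

# Hypothetical minimal sketch: FGSM adversarial examples and adversarial
# training for a logistic-regression "IDS" on synthetic two-class data.
rng = np.random.default_rng(0)

def sigmoid(z):
    z = np.clip(z, -30.0, 30.0)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5):
    """Fit logistic regression by batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the logistic loss
    return w

def fgsm(X, y, w, eps=0.5):
    # For logistic regression the input gradient of the loss is (p - y) * w;
    # FGSM shifts each feature by eps in the direction of that gradient's sign.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

# Two Gaussian blobs stand in for benign vs. attack traffic.
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.repeat([0.0, 1.0], 200)

w = train(X, y)
X_adv = fgsm(X, y, w)

acc_clean = np.mean((sigmoid(X @ w) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w) > 0.5) == y)

# Adversarial training: refit on clean plus adversarial examples,
# then evaluate against a fresh FGSM attack on the retrained model.
w_at = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
acc_adv_at = np.mean((sigmoid(fgsm(X, y, w_at) @ w_at) > 0.5) == y)
print(acc_clean, acc_adv, acc_adv_at)
```

In the paper this loop would be repeated per attack method, with the "seen" attacks used for training the ensemble members and the "unseen" attacks held out for evaluation.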
Source journal: Neural Network World (Engineering/Technology – Computer Science: Artificial Intelligence)
CiteScore: 1.80 · Self-citation rate: 0.00% · Articles published: 0 · Review time: 12 months
Journal description: Neural Network World is a bimonthly journal providing the latest developments in informatics, with attention mainly devoted to brain science, the theory and applications of neural networks (both artificial and natural), fuzzy-neural systems, methods and applications of evolutionary algorithms, methods of parallel and massively parallel computing, soft computing, and methods of artificial intelligence.