{"title":"基于集成对抗训练的机器学习入侵检测系统对抗性攻击防御","authors":"Muhammad Shahzad Haroon, Husnain Mansoor Ali","doi":"10.14311/nnw.2023.33.018","DOIUrl":null,"url":null,"abstract":"In this paper, a defence mechanism is proposed against adversarial attacks. The defence is based on an ensemble classifier that is adversarially trained. This is accomplished by generating adversarial attacks from four different attack methods, i.e., Jacobian-based saliency map attack (JSMA), projected gradient descent (PGD), momentum iterative method (MIM), and fast gradient signed method (FGSM). The adversarial examples are used to identify the robust machine-learning algorithms which eventually participate in the ensemble. The adversarial attacks are divided into seen and unseen attacks. To validate our work, the experiments are conducted using NSLKDD, UNSW-NB15 and CICIDS17 datasets. Grid search for the ensemble is used to optimise results. The parameter used for performance evaluations is accuracy, F1 score and AUC score. It is shown that an adversarially trained ensemble classifier produces better results.","PeriodicalId":49765,"journal":{"name":"Neural Network World","volume":"64 1","pages":"0"},"PeriodicalIF":0.7000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ensemble adversarial training based defense against adversarial attacks for machine learning-based intrusion detection system\",\"authors\":\"Muhammad Shahzad Haroon, Husnain Mansoor Ali\",\"doi\":\"10.14311/nnw.2023.33.018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a defence mechanism is proposed against adversarial attacks. The defence is based on an ensemble classifier that is adversarially trained. This is accomplished by generating adversarial attacks from four different attack methods, i.e., Jacobian-based saliency map attack (JSMA), projected gradient descent (PGD), momentum iterative method (MIM), and fast gradient signed method (FGSM). The adversarial examples are used to identify the robust machine-learning algorithms which eventually participate in the ensemble. The adversarial attacks are divided into seen and unseen attacks. To validate our work, the experiments are conducted using NSLKDD, UNSW-NB15 and CICIDS17 datasets. Grid search for the ensemble is used to optimise results. The parameter used for performance evaluations is accuracy, F1 score and AUC score. 
It is shown that an adversarially trained ensemble classifier produces better results.\",\"PeriodicalId\":49765,\"journal\":{\"name\":\"Neural Network World\",\"volume\":\"64 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.7000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Network World\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14311/nnw.2023.33.018\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Network World","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14311/nnw.2023.33.018","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Ensemble adversarial training based defense against adversarial attacks for machine learning-based intrusion detection system
In this paper, a defence mechanism is proposed against adversarial attacks. The defence is based on an ensemble classifier that is adversarially trained. This is accomplished by generating adversarial attacks with four different attack methods: the Jacobian-based saliency map attack (JSMA), projected gradient descent (PGD), the momentum iterative method (MIM), and the fast gradient sign method (FGSM). The adversarial examples are used to identify the robust machine-learning algorithms that eventually participate in the ensemble. The adversarial attacks are divided into seen and unseen attacks. To validate the approach, experiments are conducted on the NSL-KDD, UNSW-NB15 and CICIDS17 datasets, with grid search used to optimise the ensemble. Performance is evaluated with accuracy, F1 score and AUC score. It is shown that an adversarially trained ensemble classifier produces better results.
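To make the pipeline the abstract describes concrete, below is a minimal Python sketch of one slice of it: crafting FGSM adversarial examples against a logistic-regression surrogate, augmenting the training set with them (adversarial training), and grid-searching a soft-voting ensemble scored on F1. The base learners, the epsilon value, the grid values, and the toy data are illustrative assumptions, not the paper's actual configuration; the paper additionally uses JSMA, PGD and MIM, which are omitted here for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def fgsm(surrogate, X, y, eps=0.1):
    """FGSM against a binary logistic-regression surrogate: step each sample
    in the sign of the cross-entropy gradient w.r.t. the input, which for
    p = sigmoid(w.x + b) is d loss / dx = (p - y) * w (Goodfellow et al., 2015)."""
    w = surrogate.coef_.ravel()
    p = surrogate.predict_proba(X)[:, 1]
    return X + eps * np.sign(np.outer(p - y, w))

# Toy binary data standing in for an IDS dataset such as NSL-KDD.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

surrogate = LogisticRegression(max_iter=1000).fit(X, y)
X_adv = fgsm(surrogate, X, y, eps=0.2)

# Adversarial training: augment the clean set with the adversarial examples.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])

# Soft-voting ensemble of two (assumed) base learners.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")

# Grid search over the ensemble, scored on F1 as in the paper's evaluation.
grid = GridSearchCV(ensemble,
                    param_grid={"rf__n_estimators": [100, 200],
                                "lr__C": [0.1, 1.0]},
                    scoring="f1", cv=3).fit(X_aug, y_aug)

# Report the three metrics the paper uses, here on the adversarial examples.
best = grid.best_estimator_
pred = best.predict(X_adv)
proba = best.predict_proba(X_adv)[:, 1]
print("accuracy:", accuracy_score(y, pred),
      "F1:", f1_score(y, pred),
      "AUC:", roc_auc_score(y, proba))
```

Soft voting is chosen here because it averages the base learners' predicted probabilities, which in turn makes an AUC score directly computable from the ensemble's output; hard voting would only yield class labels.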
Journal introduction:
Neural Network World is a bimonthly journal providing the latest developments in the field of informatics with attention mainly devoted to the problems of:
brain science,
theory and applications of neural networks (both artificial and natural),
fuzzy-neural systems,
methods and applications of evolutionary algorithms,
methods of parallel and massively parallel computing,
problems of soft computing,
methods of artificial intelligence.