{"title":"网络入侵检测系统机器学习模型中投毒和逃避攻击的敏感性分析","authors":"Kevin Talty, J. Stockdale, Nathaniel D. Bastian","doi":"10.1109/MILCOM52596.2021.9652959","DOIUrl":null,"url":null,"abstract":"As the demand for data has increased, we have witnessed a surge in the use of machine learning to help aid industry and government in making sense of massive amounts of data and, subsequently, making predictions and decisions. For the military, this surge has manifested itself in the Internet of Battlefield Things. The pervasive nature of data on today's battlefield will allow machine learning models to increase soldier lethality and survivability. However, machine learning models are predicated upon the assumptions that the data upon which these machine learning models are being trained is truthful and the machine learning models are not compromised. These assumptions surrounding the quality of data and models cannot be the status-quo going forward as attackers establish novel methods to exploit machine learning models for their benefit. These novel attack methods can be described as adversarial machine learning (AML). These attacks allow an attacker to unsuspectingly alter a machine learning model before and after model training in order to degrade a model's ability to detect malicious activity. In this paper, we show how AML, by poisoning data sets and evading well trained models, affect machine learning models' ability to function as Network Intrusion Detection Systems (NIDS). Finally, we highlight why evasion attacks are especially effective in this setting and discuss some of the causes for this degradation of model effectiveness.","PeriodicalId":187645,"journal":{"name":"MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"A Sensitivity Analysis of Poisoning and Evasion Attacks in Network Intrusion Detection System Machine Learning Models\",\"authors\":\"Kevin Talty, J. Stockdale, Nathaniel D. Bastian\",\"doi\":\"10.1109/MILCOM52596.2021.9652959\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As the demand for data has increased, we have witnessed a surge in the use of machine learning to help aid industry and government in making sense of massive amounts of data and, subsequently, making predictions and decisions. For the military, this surge has manifested itself in the Internet of Battlefield Things. The pervasive nature of data on today's battlefield will allow machine learning models to increase soldier lethality and survivability. However, machine learning models are predicated upon the assumptions that the data upon which these machine learning models are being trained is truthful and the machine learning models are not compromised. These assumptions surrounding the quality of data and models cannot be the status-quo going forward as attackers establish novel methods to exploit machine learning models for their benefit. These novel attack methods can be described as adversarial machine learning (AML). These attacks allow an attacker to unsuspectingly alter a machine learning model before and after model training in order to degrade a model's ability to detect malicious activity. In this paper, we show how AML, by poisoning data sets and evading well trained models, affect machine learning models' ability to function as Network Intrusion Detection Systems (NIDS). 
Finally, we highlight why evasion attacks are especially effective in this setting and discuss some of the causes for this degradation of model effectiveness.\",\"PeriodicalId\":187645,\"journal\":{\"name\":\"MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MILCOM52596.2021.9652959\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MILCOM52596.2021.9652959","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Sensitivity Analysis of Poisoning and Evasion Attacks in Network Intrusion Detection System Machine Learning Models
As the demand for data has increased, we have witnessed a surge in the use of machine learning to help industry and government make sense of massive amounts of data and, subsequently, make predictions and decisions. For the military, this surge has manifested itself in the Internet of Battlefield Things. The pervasive nature of data on today's battlefield will allow machine learning models to increase soldier lethality and survivability. However, machine learning models are predicated on the assumptions that the data on which they are trained are truthful and that the models themselves have not been compromised. These assumptions about the quality of data and models cannot remain the status quo, as attackers are establishing novel methods to exploit machine learning models for their own benefit. These novel attack methods can be described as adversarial machine learning (AML). Such attacks allow an attacker to surreptitiously alter a machine learning model before or after training in order to degrade its ability to detect malicious activity. In this paper, we show how AML, by poisoning data sets and evading well-trained models, affects machine learning models' ability to function as Network Intrusion Detection Systems (NIDS). Finally, we highlight why evasion attacks are especially effective in this setting and discuss some of the causes of this degradation in model effectiveness.
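To make the poisoning side of the sensitivity analysis concrete, the sketch below shows one common instantiation: label-flipping poisoning, in which an attacker corrupts a fraction of training labels before the NIDS classifier is fit, and detection accuracy is measured as that fraction grows. The synthetic "flow" features, the random-forest model, and the poisoning rates are illustrative assumptions, not the paper's experimental setup.

```python
# Illustrative sketch (not the paper's method): label-flipping data
# poisoning against a NIDS-style binary classifier, swept over rates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "network flow" features: two separable clusters standing in
# for benign (label 0) and malicious (label 1) traffic.
X = np.vstack([rng.normal(0.0, 1.0, (1000, 8)),
               rng.normal(2.0, 1.0, (1000, 8))])
y = np.concatenate([np.zeros(1000), np.ones(1000)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

def poison_labels(y_train, rate, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    y_poisoned = y_train.copy()
    n_flip = int(rate * len(y_train))
    idx = rng.choice(len(y_train), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Sensitivity sweep: clean-test accuracy as the poisoning rate grows.
for rate in [0.0, 0.1, 0.2, 0.4]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, poison_labels(y_tr, rate, rng))
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```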
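The evasion side can be sketched the same way. For a linear detector, an FGSM-style attack is especially simple: the gradient of the malicious-class score with respect to the input is just the weight vector, so the attacker nudges each malicious sample a small step against it. The logistic-regression model, epsilon values, and data below are illustrative assumptions used only to show the mechanism.

```python
# Illustrative sketch (not the paper's method): FGSM-style evasion of a
# linear NIDS classifier at test time, after it has been trained cleanly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (1000, 8)),   # benign flows
               rng.normal(2.0, 1.0, (1000, 8))])  # malicious flows
y = np.concatenate([np.zeros(1000), np.ones(1000)])

clf = LogisticRegression().fit(X, y)
w = clf.coef_[0]  # gradient of the decision score w.r.t. the input

X_mal = X[y == 1]
print("fraction detected before evasion:", clf.predict(X_mal).mean())

# Perturb each malicious flow a small step against the gradient of the
# "malicious" score; larger epsilon trades stealth for evasion success.
for eps in [0.5, 1.0, 2.0]:
    X_adv = X_mal - eps * np.sign(w)
    detected = clf.predict(X_adv).mean()
    print(f"eps={eps}: fraction detected after evasion: {detected:.3f}")
```

Because the perturbation is applied only at inference time and needs no access to the training pipeline, even small epsilon values can sharply reduce the detection rate, which is consistent with the paper's observation that evasion attacks are especially effective in the NIDS setting.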