An Approach for Poisoning Attacks against RNN-Based Cyber Anomaly Detection
Jinghui Xu, Yu Wen, Chun Yang, Dan Meng
2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), December 2020
DOI: 10.1109/TrustCom50675.2020.00231
Traditional intrusion detection systems struggle to cope with the growing variety of unknown attacks in an increasingly complex Internet environment, and reliable anomaly detection techniques are needed to improve the security of cyberspace. The rapid development of artificial intelligence has opened new opportunities for anomaly detection, and deep-learning-based anomaly detection systems have performed well in several studies. However, neural networks depend heavily on data quality: a small number of poisoned samples injected into the training set can have a severe impact on the results. Online threat detection systems based on deep learning are particularly exposed to poisoning because they must continuously collect data and retrain. We propose a poisoning attack method that uses adversarial samples against anomaly detection systems built on unsupervised deep neural networks, and that can degrade the network with as few injected samples as possible. We verified the effectiveness of the poisoning attack on the Los Alamos National Laboratory network security data set and further demonstrated its generality on other anomaly detection data sets.
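To illustrate the general vulnerability the abstract describes, the following is a minimal toy sketch, not the paper's RNN-based method: an anomaly detector that learns its alert threshold as a high quantile of training-time anomaly scores is retrained on data containing a small fraction of attacker-crafted samples. The score values, poison fraction, and quantile-threshold detector are all illustrative assumptions chosen for this sketch.

```python
import numpy as np

def fit_threshold(scores, q=0.99):
    """Learn an alert threshold as a high quantile of training anomaly scores.
    A toy stand-in for retraining a deep detector; not the paper's model."""
    return np.quantile(scores, q)

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=10_000)  # scores of benign training traffic
attack_score = 5.0                          # score a real attack would receive

# Clean training: the attack scores far above the learned threshold.
thr_clean = fit_threshold(benign)
print(attack_score > thr_clean)   # attack is flagged

# Poisoned retraining: inject ~2% crafted samples scoring just above the
# attack, dragging the learned quantile threshold past the attack's score.
poison = np.full(200, 5.5)
thr_poisoned = fit_threshold(np.concatenate([benign, poison]))
print(attack_score > thr_poisoned)  # attack now evades detection
```

The sketch shows why continuous retraining is the exposure point: the attacker never touches the model directly, only the data stream it learns from, and a small, consistent poison fraction is enough to move the decision boundary.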