{"title":"保护基于OC-SVM的IDS免受投毒攻击","authors":"Lu Zhang, R. Cushing, P. Grosso","doi":"10.1109/DSC54232.2022.9888908","DOIUrl":null,"url":null,"abstract":"Machine learning techniques are widely used to detect intrusions in the cyber security field. However, most machine learning models are vulnerable to poisoning attacks, in which malicious samples are injected into the training dataset to manipulate the classifier's performance. In this paper, we first evaluate the accuracy degradation of OC-SVM classifiers with 3 different poisoning strategies with the ADLA-FD public dataset and a real world dataset. Secondly, we propose a saniti-zation mechanism based on the DBSCAN clustering algorithm. In addition, we investigate the influences of different distance metrics and different dimensionality reduction techniques and evaluate the sensitivity of the DBSCAN parameters. The ex-perimental results show that the poisoning attacks can degrade the performance of the OC-SVM classifier to a large degree, with an accuracy equal to 0.5 in most settings. The proposed sanitization method can filter out poisoned samples effectively for both datasets. The accuracy after sanitization is very close or even higher to the original value.","PeriodicalId":368903,"journal":{"name":"2022 IEEE Conference on Dependable and Secure Computing (DSC)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Defending OC-SVM based IDS from poisoning attacks\",\"authors\":\"Lu Zhang, R. Cushing, P. Grosso\",\"doi\":\"10.1109/DSC54232.2022.9888908\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning techniques are widely used to detect intrusions in the cyber security field. However, most machine learning models are vulnerable to poisoning attacks, in which malicious samples are injected into the training dataset to manipulate the classifier's performance. In this paper, we first evaluate the accuracy degradation of OC-SVM classifiers with 3 different poisoning strategies with the ADLA-FD public dataset and a real world dataset. Secondly, we propose a saniti-zation mechanism based on the DBSCAN clustering algorithm. In addition, we investigate the influences of different distance metrics and different dimensionality reduction techniques and evaluate the sensitivity of the DBSCAN parameters. The ex-perimental results show that the poisoning attacks can degrade the performance of the OC-SVM classifier to a large degree, with an accuracy equal to 0.5 in most settings. The proposed sanitization method can filter out poisoned samples effectively for both datasets. 
The accuracy after sanitization is very close or even higher to the original value.\",\"PeriodicalId\":368903,\"journal\":{\"name\":\"2022 IEEE Conference on Dependable and Secure Computing (DSC)\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE Conference on Dependable and Secure Computing (DSC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSC54232.2022.9888908\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Conference on Dependable and Secure Computing (DSC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSC54232.2022.9888908","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Machine learning techniques are widely used to detect intrusions in the field of cyber security. However, most machine learning models are vulnerable to poisoning attacks, in which malicious samples are injected into the training dataset to manipulate the classifier's performance. In this paper, we first evaluate the accuracy degradation of OC-SVM classifiers under three different poisoning strategies, using the ADLA-FD public dataset and a real-world dataset. Secondly, we propose a sanitization mechanism based on the DBSCAN clustering algorithm. In addition, we investigate the influence of different distance metrics and dimensionality reduction techniques and evaluate the sensitivity of the DBSCAN parameters. The experimental results show that poisoning attacks can degrade the performance of the OC-SVM classifier to a large degree, with an accuracy of 0.5 in most settings. The proposed sanitization method filters out poisoned samples effectively for both datasets, and the accuracy after sanitization is very close to, or even higher than, the original value.
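
The abstract does not detail the three poisoning strategies, so the sketch below is only an illustrative assumption of the general attack it describes: attack-like points are injected into the "benign" training pool so that the one-class boundary stretches to enclose the attack region, degrading detection. The synthetic data, feature dimensions, and OC-SVM hyperparameters are all placeholders, not the paper's setup.

```python
# Minimal sketch: training-set poisoning against a one-class SVM.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic "benign" traffic features and held-out test sets
# (purely illustrative; the paper uses ADLA-FD and a real-world dataset).
X_benign = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
X_test_benign = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
X_test_attack = rng.normal(loc=4.0, scale=1.0, size=(200, 8))

# Poison: inject attack-like samples into the training pool so the
# learned decision boundary expands toward the attack region.
X_poison = rng.normal(loc=4.0, scale=1.0, size=(100, 8))
X_train = np.vstack([X_benign, X_poison])

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

# OC-SVM predicts +1 for inliers (benign) and -1 for outliers (attack).
acc_benign = np.mean(clf.predict(X_test_benign) == 1)
acc_attack = np.mean(clf.predict(X_test_attack) == -1)
print(f"benign accuracy: {acc_benign:.2f}, attack detection: {acc_attack:.2f}")
```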
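The sanitization idea can be sketched the same way: cluster the possibly poisoned training pool with DBSCAN and keep only the dominant cluster, on the assumption that genuine benign traffic is dense while injected points form small clusters or noise. This is a hedged reading of the abstract, not the authors' exact procedure; the `eps` and `min_samples` values are illustrative (the paper evaluates DBSCAN's parameter sensitivity, distance metrics, and dimensionality reduction, none of which are reproduced here). `X_train` refers to the poisoned pool from the sketch above.

```python
# Minimal sketch: DBSCAN-based sanitization before retraining the OC-SVM.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.svm import OneClassSVM

def sanitize(X, eps=1.5, min_samples=10, metric="euclidean"):
    """Keep only the samples in the largest DBSCAN cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples, metric=metric).fit_predict(X)
    clusters, counts = np.unique(labels[labels != -1], return_counts=True)
    if clusters.size == 0:  # everything flagged as noise: leave data untouched
        return X
    keep = labels == clusters[np.argmax(counts)]
    return X[keep]

# Filter the poisoned pool, then retrain the detector on the clean subset.
X_clean = sanitize(X_train)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_clean)
```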