{"title":"评估对抗训练对ids和gan的影响","authors":"Hassan Chaitou, T. Robert, J. Leneutre, L. Pautet","doi":"10.1109/CSR51186.2021.9527949","DOIUrl":null,"url":null,"abstract":"Deep neural network-based Intrusion Detection Systems (IDSs) are gaining popularity to improve anomaly detection accuracy and robustness. Yet, Deep neural network (DNN) models have been shown to be vulnerable to adversarial attacks. An attacker can use a generator, here a Generative Adversarial Network, to alter an attack so that the IDS model misclassify it as normal network traffic. There is a race between adversarial attacks and mechanisms to make robust IDSs, like Adversarial Training. To our knowledge, no study thoroughly assesses how attack generators or IDS training is sensitive to parameters controlling resources spent during training. Such results provide insights on how much to spend on IDS training. This paper presents the outcome of this assessment for GANs vs adversarial training. Interestingly, it shows that GANs’ evasion capabilities are either very good or poor, with almost no average cases. Resources impact the likelihood of obtaining an efficient generator.","PeriodicalId":253300,"journal":{"name":"2021 IEEE International Conference on Cyber Security and Resilience (CSR)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Assessing adversarial training effect on IDSs and GANs\",\"authors\":\"Hassan Chaitou, T. Robert, J. Leneutre, L. Pautet\",\"doi\":\"10.1109/CSR51186.2021.9527949\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep neural network-based Intrusion Detection Systems (IDSs) are gaining popularity to improve anomaly detection accuracy and robustness. Yet, Deep neural network (DNN) models have been shown to be vulnerable to adversarial attacks. An attacker can use a generator, here a Generative Adversarial Network, to alter an attack so that the IDS model misclassify it as normal network traffic. There is a race between adversarial attacks and mechanisms to make robust IDSs, like Adversarial Training. To our knowledge, no study thoroughly assesses how attack generators or IDS training is sensitive to parameters controlling resources spent during training. Such results provide insights on how much to spend on IDS training. This paper presents the outcome of this assessment for GANs vs adversarial training. Interestingly, it shows that GANs’ evasion capabilities are either very good or poor, with almost no average cases. 
Resources impact the likelihood of obtaining an efficient generator.\",\"PeriodicalId\":253300,\"journal\":{\"name\":\"2021 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"volume\":\"36 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Cyber Security and Resilience (CSR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CSR51186.2021.9527949\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Cyber Security and Resilience (CSR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSR51186.2021.9527949","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Assessing adversarial training effect on IDSs and GANs
Deep neural network-based Intrusion Detection Systems (IDSs) are gaining popularity as a way to improve anomaly detection accuracy and robustness. Yet deep neural network (DNN) models have been shown to be vulnerable to adversarial attacks: an attacker can use a generator, here a Generative Adversarial Network (GAN), to alter an attack so that the IDS misclassifies it as normal network traffic. There is a race between such adversarial attacks and mechanisms for hardening IDSs, such as adversarial training. To our knowledge, no study thoroughly assesses how sensitive attack generators and IDS training are to the parameters that control the resources spent during training; such results would provide insight into how much to invest in IDS training. This paper presents the outcome of this assessment for GANs versus adversarial training. Interestingly, it shows that GANs' evasion capabilities are either very good or very poor, with almost no intermediate cases, and that the resources spent mainly affect the likelihood of obtaining an efficient generator.
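To make the setup concrete, below is a minimal sketch, not the authors' implementation, of the kind of interaction the abstract describes: a GAN-style generator perturbs attack feature vectors to evade a DNN IDS, while adversarial training re-trains the IDS on those generated samples. The feature dimension, network architectures, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions throughout): GAN-based evasion vs. adversarial training of an IDS.
import torch
import torch.nn as nn

FEATURES = 41     # assumed size of a flow feature vector (e.g., NSL-KDD-like)
NOISE_DIM = 16

class IDS(nn.Module):
    """Binary classifier: logit > 0 means 'attack', <= 0 means 'normal'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)  # raw logits

class Generator(nn.Module):
    """Maps an attack sample plus random noise to a perturbed attack sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEATURES + NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, FEATURES))
    def forward(self, attack, noise):
        return attack + self.net(torch.cat([attack, noise], dim=1))

bce = nn.BCEWithLogitsLoss()
ids, gen = IDS(), Generator()
opt_ids = torch.optim.Adam(ids.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-3)

def train_step(attacks, normals):
    # 1) The generator tries to make attacks look 'normal' (label 0) to the IDS.
    noise = torch.randn(attacks.size(0), NOISE_DIM)
    adv = gen(attacks, noise)
    g_loss = bce(ids(adv), torch.zeros(adv.size(0), 1))
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()

    # 2) Adversarial training: the IDS learns on real traffic *and* on the
    #    generator's adversarial samples, all labeled as attacks (label 1).
    x = torch.cat([normals, attacks, adv.detach()])
    y = torch.cat([torch.zeros(normals.size(0), 1),
                   torch.ones(attacks.size(0) + adv.size(0), 1)])
    d_loss = bce(ids(x), y)
    opt_ids.zero_grad(); d_loss.backward(); opt_ids.step()
    return g_loss.item(), d_loss.item()

# Usage with random stand-in data (a real study would use an IDS dataset):
attacks = torch.rand(32, FEATURES)
normals = torch.rand(32, FEATURES)
print(train_step(attacks, normals))
```

In this sketch, the "resources spent during training" that the paper studies would correspond to knobs such as the number of training steps, batch sizes, and model capacities given to the generator and to the IDS.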