Adversarial Attacks on Deep Learning-Based Methods for Network Traffic Classification

Meimei Li, Yi Tian Xu, Nan Li, Zhongfeng Jin

2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), December 2022. DOI: 10.1109/TrustCom56396.2022.00154
Network traffic data can easily be monitored and captured by attackers, and attacks on different kinds of traffic threaten the intranet environment. Deep learning methods are widely used to classify network traffic because of their high classification performance. However, the study of adversarial examples in computer vision has shown that deep learning models are flawed: small, crafted perturbations can cause existing methods to produce incorrect results with high confidence. In this paper, adversarial examples are applied to a network traffic classification model, causing a CNN classifier to misclassify network traffic. We then adversarially train the classification model with samples generated by the FGSM attack, validate the effect of this training, and improve classification accuracy. By applying adversarial perturbations to network traffic data, our approach enables a proactive defence against intranet eavesdropping: the attacker's classification model is induced to misclassify the traffic before an attack occurs.
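To make the attack the abstract refers to concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM): the input is perturbed by a small step `eps` in the direction of the sign of the loss gradient, `x_adv = x + eps * sign(∂L/∂x)`. The paper attacks a CNN traffic classifier; here a toy logistic-regression model (and its hand-derived gradient) stands in for it, and the weights and feature vector are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM perturbation of input x against a logistic-regression classifier.

    Computes x_adv = x + eps * sign(dL/dx), where L is the binary
    cross-entropy loss of the model sigmoid(w.x + b) against label y.
    """
    z = float(x @ w + b)              # logit
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    # For binary cross-entropy with a sigmoid output, dL/dz = (p - y),
    # and since z = w.x + b, the input gradient is dL/dx = (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy "traffic feature" vector and a fixed stand-in classifier.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.4])
y = 1.0  # true label of the traffic sample

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# Each feature moves by exactly eps in the loss-increasing direction:
# x_adv = [0.1, 0.0, 0.3]
```

In a real attack (or in adversarial training, as the paper does) the same sign-of-gradient step is taken with respect to the CNN's loss, and the perturbed flows are fed back as training samples to harden the classifier.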