Adversarial Attacks on Deep Learning-Based Methods for Network Traffic Classification

Meimei Li, Yi Tian Xu, Nan Li, Zhongfeng Jin
Published in: 2022 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), December 2022
DOI: 10.1109/TrustCom56396.2022.00154
Citations: 1

Abstract

Network traffic data can easily be monitored and captured by attackers, and attacks crafted against different kinds of traffic threaten the intranet environment. Deep learning methods are widely used to classify network traffic because of their high classification performance, but the success of adversarial samples in computer vision shows that such methods are flawed: carefully perturbed inputs can make them produce incorrect results with high confidence. In this paper, adversarial samples are applied to a network traffic classification model, causing a CNN classifier to misclassify network traffic. By adversarially training the classification model with samples generated by the FGSM attack, we validate the training effect and improve classification accuracy. By applying adversarial samples to network traffic data, our approach misleads the attacker's classification model into misclassifying the traffic, enabling a proactive defence against intranet eavesdropping before an attack occurs.
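The FGSM attack mentioned in the abstract perturbs each input one step of size ε in the direction of the sign of the loss gradient with respect to that input. The sketch below is illustrative, not the paper's implementation: to stay dependency-light it uses a softmax linear classifier with hand-derived gradients in place of the paper's CNN, and the feature width (784 bytes per flow) and class count (10) are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fgsm_attack(W, b, x, y, epsilon=0.05):
    """x_adv = clip(x + eps * sign(grad_x CE(softmax(xW + b), y)), 0, 1).

    For a softmax linear model, the cross-entropy gradient w.r.t. the
    input is (p - onehot(y)) @ W.T, so no autodiff library is needed.
    """
    p = softmax(x @ W + b)                # (n, classes) predicted probs
    p[np.arange(len(y)), y] -= 1.0        # p - onehot(y)
    grad_x = p @ W.T                      # (n, features) input gradient
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Demo on random stand-in "traffic": 4 flows, 784 byte-features in [0, 1].
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))
b = np.zeros(10)
x = rng.random((4, 784))
y = rng.integers(0, 10, size=4)
x_adv = fgsm_attack(W, b, x, y)

# The perturbation respects the L-infinity budget epsilon.
print(bool(np.abs(x_adv - x).max() <= 0.05 + 1e-9))  # prints True
```

Adversarial training, as used in the paper for defence, would then mix such `x_adv` batches with clean samples when fitting the classifier, so the model learns to classify both correctly.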