Towards universal and transferable adversarial attacks against network traffic classification

Impact Factor 4.4 · CAS Region 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Ruiyang Ding, Lei Sun, Weifei Zang, Leyu Dai, Zhiyi Ding, Bayi Xu
DOI: 10.1016/j.comnet.2024.110790
Journal: Computer Networks · Journal Article · Published 2024-09-14
Citations: 0

Abstract

In recent years, deep learning has shown remarkable potential in many fields, but it also harbors serious vulnerabilities. In network traffic classification, attackers exploit these vulnerabilities by adding carefully designed perturbations to normal traffic, causing misclassification and thereby mounting adversarial attacks. Existing adversarial attack methods for network traffic mainly target specific models or specific application scenarios, and suffer from poor transferability, high time cost, and low practicality. This article therefore proposes a universal and transferable adversarial attack method against network traffic classification. The method not only performs universal adversarial attacks on all samples in a network traffic dataset, but also achieves cross-data and cross-model transferable attacks; that is, its attack effects transfer at both the traffic-data level and the classification-model level. It exploits the geometric characteristics of the network model to design the target loss function and to optimize the generation of universal perturbations, biasing the features learned at each layer of the model and thereby inducing incorrect classification results. Universality and transferability experiments were conducted on standard traffic datasets from three different classification applications, USTC-TFC2016, ISCX2016, and CICIoT2023, and on five common network models, including LeNet5.
The results show that the proposed method achieves average attack success rates above 80%, 85%, and 88% on the five models for USTC-TFC2016, ISCX2016, and CICIoT2023, respectively, with an average time cost of roughly 0 to 0.3 ms. The method also exhibits strong transferable attack performance across the five models and the three datasets, with transfer attack rates approaching 100%, bringing it closer to practical application.
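The paper's actual optimization is not reproduced in this abstract. As a purely illustrative sketch of the two ideas it describes (a single universal perturbation applied to every sample, and its transfer to a second model), the toy example below uses a linear surrogate classifier on synthetic "traffic" features. Every name, dimension, and budget here is an assumption for illustration, not the authors' method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flow features: 200 "flows", 64 features each,
# labeled by a random linear rule (a toy surrogate for a trained
# traffic classifier -- NOT the paper's models or datasets).
n, d = 200, 64
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)          # labels in {-1, +1}

def predict(w, X):
    """Linear classifier: sign of the score."""
    return np.sign(X @ w)

# "Victim" model: the true direction, so clean accuracy is 100%.
w = w_true / np.linalg.norm(w_true)

# One universal perturbation shared by ALL samples. For a linear model,
# -eps * sign(w) is the L-infinity-bounded direction that most decreases
# every score, pushing all flows toward the negative class (a closed-form
# stand-in for gradient-optimized universal perturbations).
eps = 1.0
delta = -eps * np.sign(w)

acc_clean = (predict(w, X) == y).mean()
fooling_rate = (predict(w, X + delta) != predict(w, X)).mean()

# Transferability: the same delta, crafted against w, is applied to a
# second, slightly different model w2 (simulating a cross-model attack).
w2 = w_true + 0.3 * rng.normal(size=d)
w2 = w2 / np.linalg.norm(w2)
transfer_fooled = (predict(w2, X + delta) == -1).mean()

print(f"clean accuracy         : {acc_clean:.2f}")
print(f"fooling rate on w      : {fooling_rate:.2f}")
print(f"forced to class -1 (w2): {transfer_fooled:.2f}")
```

For a linear model the worst-case universal direction has this closed form; the paper instead optimizes perturbations against the geometry of deep models layer by layer, which this sketch does not attempt to capture.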
Source journal: Computer Networks (Engineering/Technology · Telecommunications)
CiteScore: 10.80 · Self-citation rate: 3.60% · Articles per year: 434 · Average review time: 8.6 months
About the journal: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers, and operators of networks, as well as designers and implementers. The Editorial Board will consider any material for publication that is of interest to those groups.