Towards effective black-box attacks on DoH tunnel detection systems

IF 4.4 · CAS Tier 2 (Computer Science) · JCR Q1, COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
Linghao Li, Yan Zhu, Yun Li, Wei Qiao, Zelin Cui, Susu Cui, Bo Jiang, Zhigang Lu
{"title":"Towards effective black-box attacks on DoH tunnel detection systems","authors":"Linghao Li,&nbsp;Yan Zhu,&nbsp;Yun Li,&nbsp;Wei Qiao,&nbsp;Zelin Cui,&nbsp;Susu Cui,&nbsp;Bo Jiang,&nbsp;Zhigang Lu","doi":"10.1016/j.comnet.2025.111524","DOIUrl":null,"url":null,"abstract":"<div><div>The introduction of DNS-over-HTTPS (DoH) aims to mitigate the security vulnerabilities of traditional DNS. However, attackers have begun exploiting DoH to establish tunnels for malicious activities. Machine learning (ML)-based network intrusion detection systems (NIDSs) have emerged as a promising approach for detecting DoH tunnel attacks. Paradoxically, these ML models are susceptible to adversarial machine learning attacks. A growing number of researchers are investigating adversarial techniques to circumvent NIDS, yet they neglect the real-world viability of implementing these attack strategies under specific network constraints. To address this gap, we propose a black-box attack framework leveraging the transferability of adversarial samples, along with an adversarial sample generation algorithm called Strategic Feature-Adaptive Adversarial Attack (SFAA) which serves as the black-box attack framework’s core component. SFAA incorporates feature correlations and feature importance to optimize the perturbation direction, thereby generating more realistic adversarial samples. In the context of DoH intrusion attacks, we employ our proposed black-box attack framework to carry out adversarial attacks on commonly used and highly effective ML models. Our experimental results demonstrate that the proposed black-box attack framework effectively evades ML models, and adversarial samples generated by SFAA achieve an attack success rate (ASR) of 63.26%, surpassing state-of-the-art adversarial attacks, including the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), DeepFool, Carlini &amp; Wagner (C&amp;W), and Jacobian Saliency Map Attack (JSMA). Moreover, we propose a defense framework combining adversarial training and confidence-driven secondary classification, providing a novel paradigm for the robust design of machine learning models to mitigate adversarial attacks.</div></div>","PeriodicalId":50637,"journal":{"name":"Computer Networks","volume":"270 ","pages":"Article 111524"},"PeriodicalIF":4.4000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389128625004918","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

The introduction of DNS-over-HTTPS (DoH) aims to mitigate the security vulnerabilities of traditional DNS. However, attackers have begun exploiting DoH to establish tunnels for malicious activities. Machine learning (ML)-based network intrusion detection systems (NIDSs) have emerged as a promising approach for detecting DoH tunnel attacks. Paradoxically, these ML models are susceptible to adversarial machine learning attacks. A growing number of researchers are investigating adversarial techniques to circumvent NIDS, yet they neglect the real-world viability of implementing these attack strategies under specific network constraints. To address this gap, we propose a black-box attack framework leveraging the transferability of adversarial samples, along with an adversarial sample generation algorithm called Strategic Feature-Adaptive Adversarial Attack (SFAA) which serves as the black-box attack framework’s core component. SFAA incorporates feature correlations and feature importance to optimize the perturbation direction, thereby generating more realistic adversarial samples. In the context of DoH intrusion attacks, we employ our proposed black-box attack framework to carry out adversarial attacks on commonly used and highly effective ML models. Our experimental results demonstrate that the proposed black-box attack framework effectively evades ML models, and adversarial samples generated by SFAA achieve an attack success rate (ASR) of 63.26%, surpassing state-of-the-art adversarial attacks, including the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Projected Gradient Descent (PGD), DeepFool, Carlini & Wagner (C&W), and Jacobian Saliency Map Attack (JSMA). Moreover, we propose a defense framework combining adversarial training and confidence-driven secondary classification, providing a novel paradigm for the robust design of machine learning models to mitigate adversarial attacks.
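The abstract gives no implementation details of SFAA. Purely as an illustrative sketch of the general idea it describes (steering perturbations with feature importance and feature correlations, crafted against a local surrogate and then transferred to the target NIDS), the following Python snippet shows one plausible shape such an attack could take. The surrogate model, correlation threshold, step size, and feature handling here are assumptions for illustration only and are not taken from the paper.

# Illustrative sketch only -- NOT the authors' SFAA implementation.
# Assumptions (hypothetical): a surrogate RandomForest is trained on labeled
# DoH flow features, and adversarial flows are crafted by nudging the most
# important features toward benign statistics while dragging strongly
# correlated features along to keep the perturbed flow plausible.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def craft_adversarial(surrogate, X_mal, X_benign, eps=0.1, top_k=5):
    """Perturb the top-k most important features of malicious samples.

    The perturbation direction for each chosen feature points toward the
    benign mean, scaled by that feature's importance; correlated features
    are co-perturbed, a stand-in for the importance/correlation guidance
    mentioned in the abstract.
    """
    importances = surrogate.feature_importances_
    top = np.argsort(importances)[-top_k:]            # most informative features
    corr = np.corrcoef(X_benign, rowvar=False)        # pairwise feature correlations
    benign_mean = X_benign.mean(axis=0)               # crude benign reference point

    X_adv = X_mal.copy()
    for j in top:
        # Move feature j toward the benign mean, weighted by its importance.
        delta = eps * importances[j] * np.sign(benign_mean[j] - X_mal[:, j])
        X_adv[:, j] += delta
        # Co-perturb features strongly correlated with j (threshold is arbitrary).
        for k in np.where(np.abs(corr[j]) > 0.8)[0]:
            if k != j:
                X_adv[:, k] += delta * corr[j, k]
    return X_adv

# Hypothetical usage: surrogate fitted on (X_train, y_train) flow features,
# X_mal holds malicious DoH-tunnel flows to be disguised.
# surrogate = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# X_adv = craft_adversarial(surrogate, X_mal, X_train[y_train == 0])

In a transfer-based black-box setting such as the one the abstract describes, samples crafted against a local surrogate like this would then be replayed against the target detection model rather than requiring any access to its internals.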
Source journal

Computer Networks (Engineering & Technology - Telecommunications)

CiteScore: 10.80
Self-citation rate: 3.60%
Articles published: 434
Review time: 8.6 months

Journal description: Computer Networks is an international, archival journal providing a publication vehicle for complete coverage of all topics of interest to those involved in the computer communications networking area. The audience includes researchers, managers and operators of networks as well as designers and implementors. The Editorial Board will consider any material for publication that is of interest to those groups.