WPDA: frequency-based backdoor attack with wavelet packet decomposition

IF 6.3 · CAS Zone 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Zhengyao Song , Yongqiang Li , Danni Yuan , Li Liu , Shaokui Wei , Baoyuan Wu
DOI: 10.1016/j.neunet.2025.108074 · Neural Networks, Vol. 194, Article 108074 · Published 2025-09-04 (Journal Article) · Citations: 0

Abstract

WPDA: frequency-based backdoor attack with wavelet packet decomposition
This work explores backdoor attacks, an emerging security threat against deep neural networks (DNNs). The adversary aims to inject a backdoor into the model by manipulating a portion of the training samples, such that the backdoor can be activated by a particular trigger to produce a target prediction at inference. Existing backdoor attacks typically require moderate or high poisoning ratios to achieve the desired attack performance, which makes them susceptible to some advanced backdoor defenses (e.g., poisoned sample detection). One possible solution to this dilemma is enhancing attack performance at low poisoning ratios, which has rarely been studied due to its difficulty. To achieve this goal, we propose an innovative frequency-based backdoor attack via wavelet packet decomposition (WPD), which finely decomposes the original image into multiple sub-spectrograms carrying semantic information. This allows us to accurately identify the most critical frequency regions for effectively inserting the trigger into the victim image, such that the trigger information can be sufficiently learned to form the backdoor. The proposed attack stands out for its exceptional effectiveness, stealthiness, and resistance to defenses at an extremely low poisoning ratio. Notably, it achieves a 98.12% attack success rate on CIFAR-10 with an extremely low poisoning ratio of 0.004% (i.e., only 2 poisoned samples among 50,000 training samples), and bypasses several advanced backdoor defenses. In addition, we provide extensive experiments to demonstrate the efficacy of the proposed method, as well as in-depth analyses to explain its underlying mechanism.
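The abstract describes decomposing an image into frequency sub-bands via wavelet packet decomposition and inserting the trigger into selected bands. As a rough illustration of that idea only (not the authors' WPDA implementation, whose band selection and trigger design are not specified here), the sketch below performs one level of a 2D Haar decomposition in pure Python; the function names are hypothetical.

```python
# One level of 2D Haar wavelet (packet) decomposition: split an image into
# four frequency sub-bands, and the matching inverse transform. A trigger
# could then be blended into a chosen detail band before reconstructing.

def haar_wpd_level(img):
    """Split a 2D image (list of lists, even dimensions) into LL, LH, HL, HH."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]  # low-low: coarse content
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]  # horizontal detail
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]  # vertical detail
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]  # diagonal detail
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def haar_wpd_inverse(LL, LH, HL, HH):
    """Reconstruct the image from its four Haar sub-bands (exact inverse)."""
    h, w = len(LL) * 2, len(LL[0]) * 2
    img = [[0.0] * w for _ in range(h)]
    for i in range(len(LL)):
        for j in range(len(LL[0])):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = ll + lh + hl + hh
            img[2 * i][2 * j + 1] = ll - lh + hl - hh
            img[2 * i + 1][2 * j] = ll + lh - hl - hh
            img[2 * i + 1][2 * j + 1] = ll - lh - hl + hh
    return img
```

Applying `haar_wpd_level` recursively to each sub-band would yield the finer wavelet *packet* tree the paper refers to; the point of the sketch is only that sub-bands can be modified independently and inverted back into an image.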
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.