Zhengyao Song, Yongqiang Li, Danni Yuan, Li Liu, Shaokui Wei, Baoyuan Wu
Title (translated from Chinese listing): A frequency-based backdoor attack via wavelet packet decomposition.
DOI: 10.1016/j.neunet.2025.108074
Journal: Neural Networks, Volume 194, Article 108074
Publication date: 2025-09-04 (Journal Article)
Impact factor: 6.3; JCR Q1 (Computer Science, Artificial Intelligence)
URL: https://www.sciencedirect.com/science/article/pii/S0893608025009542
WPDA: frequency-based backdoor attack with wavelet packet decomposition
This work explores backdoor attacks, an emerging security threat against deep neural networks (DNNs). The adversary aims to inject a backdoor into the model by manipulating a portion of the training samples, such that the backdoor can be activated by a particular trigger to force a target prediction at inference. Existing backdoor attacks often require moderate or high poisoning ratios to achieve the desired attack performance, which makes them susceptible to advanced backdoor defenses (e.g., poisoned sample detection). One possible way out of this dilemma is to enhance attack performance at low poisoning ratios, a setting that has rarely been studied due to its difficulty. To achieve this goal, we propose an innovative frequency-based backdoor attack via wavelet packet decomposition (WPD), which finely decomposes the original image into multiple sub-spectrograms carrying semantic information. This allows us to accurately identify the most critical frequency regions in which to insert the trigger into the victim image, so that the trigger information can be sufficiently learned to form the backdoor. The proposed attack stands out for its exceptional effectiveness, stealthiness, and resistance at an extremely low poisoning ratio. Notably, it achieves a 98.12% attack success rate on CIFAR-10 with an extremely low poisoning ratio of 0.004% (i.e., only 2 poisoned samples among 50,000 training samples), and bypasses several advanced backdoor defenses. In addition, we provide extensive experiments demonstrating the efficacy of the proposed method, as well as in-depth analyses explaining its underlying mechanism.
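The abstract names wavelet packet decomposition (WPD) as the core transform: unlike the standard discrete wavelet transform, WPD recursively splits every subband, yielding a full grid of sub-spectrograms whose energies can be compared. As a rough, hypothetical illustration of that transform only (not the authors' WPDA implementation, whose trigger-insertion details are not given in this abstract), the sketch below performs a 2-level 2D Haar wavelet packet decomposition in numpy and ranks subbands by energy, one plausible proxy for "critical frequency regions". All function names here are invented for this sketch.

```python
import numpy as np

def haar_split(x, axis):
    """One Haar analysis step along `axis`: returns (low-pass, high-pass) halves."""
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def wpd2(img, levels):
    """Full 2D wavelet packet decomposition with the Haar wavelet.

    Every subband is split again at each level (unlike the standard DWT,
    which only recurses on the low-low band), giving 4**levels subbands.
    Keys are strings like "LLHH" describing the row/column filter path.
    """
    bands = {"": np.asarray(img, dtype=float)}
    for _ in range(levels):
        nxt = {}
        for key, band in bands.items():
            lo_r, hi_r = haar_split(band, axis=0)      # filter rows
            for rtag, rows in (("L", lo_r), ("H", hi_r)):
                lo_c, hi_c = haar_split(rows, axis=1)  # filter columns
                nxt[key + rtag + "L"] = lo_c
                nxt[key + rtag + "H"] = hi_c
        bands = nxt
    return bands

def top_bands(bands, k=3):
    """Rank subbands by energy; high-energy bands are candidate regions
    for embedding a frequency-domain trigger (an assumption of this sketch)."""
    energy = {name: float(np.sum(b ** 2)) for name, b in bands.items()}
    return sorted(energy, key=energy.get, reverse=True)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((32, 32))  # stand-in for one CIFAR-10 channel
    bands = wpd2(img, 2)                 # 16 sub-spectrograms of shape (8, 8)
    print(top_bands(bands, 3))           # names of the 3 highest-energy subbands
```

Because the Haar filters are orthonormal, the subband energies sum exactly to the image energy, which is what makes energy-based comparison across sub-spectrograms meaningful.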
Journal overview:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.