Kejia Zhang, Juanjuan Weng, Yuanzheng Cai, Shaozi Li, Zhiming Luo
{"title":"减轻低频偏置:对抗鲁棒性的特征重新校准和频率注意正则化","authors":"Kejia Zhang , Juanjuan Weng , Yuanzheng Cai , Shaozi Li , Zhiming Luo","doi":"10.1016/j.neunet.2025.108070","DOIUrl":null,"url":null,"abstract":"<div><div>Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that HFDR consistently enhances adversarial robustness. It achieves a 2.89 % gain on CIFAR-100 with WRN34-10, and improves robustness by 3.09 % on ImageNet-1K, with a 4.89 % gain on ViT-B against AutoAttack. These results highlight the method’s adaptability to both convolutional and transformer-based architectures. Code is available at <span><span>https://github.com/KejiaZhang-Robust/HFDR</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"193 ","pages":"Article 108070"},"PeriodicalIF":6.3000,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Mitigating low-frequency bias: Feature recalibration and frequency attention regularization for adversarial robustness\",\"authors\":\"Kejia Zhang , Juanjuan Weng , Yuanzheng Cai , Shaozi Li , Zhiming Luo\",\"doi\":\"10.1016/j.neunet.2025.108070\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that HFDR consistently enhances adversarial robustness. 
It achieves a 2.89 % gain on CIFAR-100 with WRN34-10, and improves robustness by 3.09 % on ImageNet-1K, with a 4.89 % gain on ViT-B against AutoAttack. These results highlight the method’s adaptability to both convolutional and transformer-based architectures. Code is available at <span><span>https://github.com/KejiaZhang-Robust/HFDR</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"193 \",\"pages\":\"Article 108070\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2025-09-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608025009505\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025009505","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Mitigating low-frequency bias: Feature recalibration and frequency attention regularization for adversarial robustness
Ensuring the robustness of deep neural networks against adversarial attacks remains a fundamental challenge in computer vision. While adversarial training (AT) has emerged as a promising defense strategy, our analysis reveals a critical limitation: AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components. This bias is particularly concerning as each frequency component carries distinct and crucial information: low-frequency features encode fundamental structural patterns, while high-frequency features capture intricate details and textures. To address this limitation, we propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features to capture latent semantic cues. We further introduce frequency attention regularization to harmonize feature extraction across the frequency spectrum and mitigate the inherent low-frequency bias of AT. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that HFDR consistently enhances adversarial robustness. It achieves a 2.89% gain on CIFAR-100 with WRN34-10, and improves robustness by 3.09% on ImageNet-1K, with a 4.89% gain on ViT-B against AutoAttack. These results highlight the method's adaptability to both convolutional and transformer-based architectures. Code is available at https://github.com/KejiaZhang-Robust/HFDR.
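The abstract describes HFDR only at a high level. As a rough illustration of the underlying idea, the sketch below (PyTorch) splits a feature map into low- and high-frequency components with a centered FFT mask and re-weights the high-frequency part with a simple channel gate. This is not the authors' implementation: the cutoff radius, the SE-style gate, and the names split_frequencies and HighFreqRecalibration are illustrative assumptions; the released code at the linked repository is authoritative.

```python
# Illustrative sketch only (assumptions noted above), not the HFDR module itself:
# split features into low/high frequency bands via an FFT mask, then recalibrate
# the high-frequency band with a learned channel gate.
import torch
import torch.nn as nn
import torch.fft


def split_frequencies(x: torch.Tensor, radius: float = 0.25):
    """Split a (B, C, H, W) feature map into low- and high-frequency parts
    using a centered circular mask in the 2D Fourier domain."""
    B, C, H, W = x.shape
    freq = torch.fft.fftshift(torch.fft.fft2(x, norm="ortho"), dim=(-2, -1))
    yy, xx = torch.meshgrid(
        torch.arange(H, device=x.device),
        torch.arange(W, device=x.device),
        indexing="ij",
    )
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    low_mask = (dist <= radius * min(H, W)).to(x.dtype)  # keep the spectrum center
    low = torch.fft.ifft2(
        torch.fft.ifftshift(freq * low_mask, dim=(-2, -1)), norm="ortho"
    ).real
    high = x - low  # residual carries high-frequency detail and texture
    return low, high


class HighFreqRecalibration(nn.Module):
    """Channel-wise recalibration of the high-frequency band (SE-style gate);
    the actual recalibration used by HFDR may differ."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low, high = split_frequencies(x)
        # Re-weight high-frequency features so they are not suppressed,
        # then recombine with the untouched low-frequency component.
        return low + self.gate(high) * high


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    out = HighFreqRecalibration(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In this reading, the recalibrated high-frequency features are fed back into adversarial training so the network cannot simply discard them, which is the low-frequency bias the paper targets; the frequency attention regularization term that balances attention across the two bands is specified in the paper rather than here.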
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.