Evidence-based Multi-Feature Fusion for Adversarial Robustness
Zheng Wang, Xing Xu, Lei Zhu, Yi Bin, Guoqing Wang, Yang Yang, Heng Tao Shen
IEEE Transactions on Pattern Analysis and Machine Intelligence, published 2025-06-23. DOI: https://doi.org/10.1109/tpami.2025.3582518
Citations: 0
Abstract
The accumulation of adversarial perturbations in the feature space makes it impossible for Deep Neural Networks (DNNs) to know which features are robust and reliable, and thus DNNs can be fooled into relying on a single contaminated feature. Numerous defense strategies attempt to improve robustness by denoising, deactivating, or recalibrating non-robust features. Despite their effectiveness, we argue that these methods remain under-explored in terms of determining how trustworthy the features are. To address this issue, we propose a novel Evidence-based Multi-Feature Fusion (termed EMFF) for adversarial robustness. Specifically, our EMFF approach introduces evidential deep learning to help DNNs quantify the belief mass and uncertainty of the contaminated features. Subsequently, a novel multi-feature evidential fusion mechanism based on Dempster's rule is proposed to fuse the trusted features of multiple blocks within an architecture, which further helps DNNs avoid being misled by a single manipulated feature and thus improves their robustness. Comprehensive experiments confirm that, compared with existing defense techniques, our EMFF method offers clear advantages and effectiveness under both white-box and black-box attacks. They also show that, by integrating EMFF into several adversarial training strategies, we can improve robustness across distinct architectures, including traditional CNNs and recent vision Transformers, with only a few extra parameters and almost the same computational cost.
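The abstract does not spell out the exact formulation, but the evidential fusion it describes typically follows the subjective-logic recipe used in evidential deep learning: non-negative evidence from each block parameterizes a Dirichlet distribution, which yields per-class belief masses and an overall uncertainty, and the opinions from different blocks are combined with a reduced form of Dempster's rule. The sketch below illustrates that generic recipe only; the function names (`evidence_to_opinion`, `dempster_combine`) and the per-block classification heads are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def evidence_to_opinion(logits):
    """Map per-block logits to a subjective-logic opinion (belief, uncertainty).

    Evidence e_k >= 0 is obtained with a softplus; the Dirichlet parameters are
    alpha_k = e_k + 1, the belief mass is b_k = e_k / S, and the uncertainty is
    u = K / S, where S = sum_k alpha_k (so sum_k b_k + u = 1).
    """
    evidence = F.softplus(logits)                 # (B, K) non-negative evidence
    alpha = evidence + 1.0                        # Dirichlet parameters
    S = alpha.sum(dim=-1, keepdim=True)           # Dirichlet strength
    belief = evidence / S                         # per-class belief mass
    uncertainty = logits.shape[-1] / S            # u = K / S
    return belief, uncertainty

def dempster_combine(b1, u1, b2, u2):
    """Fuse two opinions with the reduced form of Dempster's combination rule."""
    # Conflict C = sum over i != j of b1_i * b2_j
    conflict = b1.sum(-1, keepdim=True) * b2.sum(-1, keepdim=True) - (b1 * b2).sum(-1, keepdim=True)
    norm = 1.0 - conflict
    belief = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    uncertainty = (u1 * u2) / norm
    return belief, uncertainty

# Hypothetical usage: fuse opinions produced by three blocks of a backbone.
if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes = 10
    block_logits = [torch.randn(4, num_classes) for _ in range(3)]  # stand-in per-block heads
    belief, uncertainty = evidence_to_opinion(block_logits[0])
    for logits in block_logits[1:]:
        b_new, u_new = evidence_to_opinion(logits)
        belief, uncertainty = dempster_combine(belief, uncertainty, b_new, u_new)
    print(belief.argmax(dim=-1), uncertainty.squeeze(-1))
```

The fused opinion keeps the same structure as the per-block ones (beliefs plus uncertainty summing to one), so a single manipulated block with high conflict or high uncertainty contributes little to the final decision, which is the intuition behind fusing trusted features across blocks.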
Journal information:
The IEEE Transactions on Pattern Analysis and Machine Intelligence publishes articles on all traditional areas of computer vision and image understanding, all traditional areas of pattern analysis and recognition, and selected areas of machine intelligence, with a particular emphasis on machine learning for pattern analysis. Areas such as techniques for visual search, document and handwriting analysis, medical image analysis, video and image sequence analysis, content-based retrieval of image and video, face and gesture recognition and relevant specialized hardware and/or software architectures are also covered.