{"title":"Scattering-guided class-irrelevant filtering for adversarially robust SAR automatic target recognition","authors":"Zhunga Liu, Jialin Lyu, Yimin Fu","doi":"10.1016/j.sigpro.2025.110273","DOIUrl":null,"url":null,"abstract":"<div><div>The vulnerability of deep neural networks (DNNs) to adversarial perturbations severely constrains their deployment in real-world applications. A common approach to defend against such perturbations is to perform input reconstruction based on image representations. However, the lack of visual intuitiveness in synthetic aperture radar (SAR) images severely complicates the reconstruction of critical target information, making the adversarial robustness of SAR automatic target recognition (ATR) systems difficult to guarantee. To address this problem, we propose a scattering-guided class-irrelevant filtering variational autoencoder (SGCIF-VAE) for adversarially robust SAR ATR. Specifically, the proposed method incorporates scattering and image-based representations to reconstruct target information from adversarial examples through feature representation and information filtering. First, strong scattering points of the target are exploited to guide the extraction of topological features, which exhibit stronger stability against adversarial perturbations than visual features. Then, a weighting reconstruction mechanism (WRM) is applied to the reconstructed image to supplement the spatial structural information. Consequently, the attention shifts induced by adversarial perturbations are effectively resisted. During training, class-relevant and class-irrelevant information are explicitly separated via a class-disentanglement variational loss (CDVL). Moreover, a bi-directional information bottleneck (BDIB) is employed to amplify the disparity in mutual information of latent variables between the input and reconstructed images, further facilitating the filtering of class-irrelevant information. Extensive experimental results on the MSTAR dataset demonstrate that SGCIF-VAE achieves superior adversarial robustness across various operating conditions. The proposed method achieves over 90% accuracy against weak perturbations and above 60% against stronger ones. The code will be released at <span><span>https://github.com/jialinlvcn/SGCIF-VAE</span><svg><path></path></svg></span> upon acceptance.</div></div>","PeriodicalId":49523,"journal":{"name":"Signal Processing","volume":"239 ","pages":"Article 110273"},"PeriodicalIF":3.6000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0165168425003871","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
The vulnerability of deep neural networks (DNNs) to adversarial perturbations severely constrains their deployment in real-world applications. A common approach to defend against such perturbations is to perform input reconstruction based on image representations. However, the lack of visual intuitiveness in synthetic aperture radar (SAR) images severely complicates the reconstruction of critical target information, making the adversarial robustness of SAR automatic target recognition (ATR) systems difficult to guarantee. To address this problem, we propose a scattering-guided class-irrelevant filtering variational autoencoder (SGCIF-VAE) for adversarially robust SAR ATR. Specifically, the proposed method incorporates scattering-based and image-based representations to reconstruct target information from adversarial examples through feature representation and information filtering. First, strong scattering points of the target are exploited to guide the extraction of topological features, which exhibit stronger stability against adversarial perturbations than visual features. Then, a weighting reconstruction mechanism (WRM) is applied to the reconstructed image to supplement the spatial structural information. Consequently, attention shifts induced by adversarial perturbations are effectively resisted. During training, class-relevant and class-irrelevant information are explicitly separated via a class-disentanglement variational loss (CDVL). Moreover, a bi-directional information bottleneck (BDIB) is employed to amplify the disparity in mutual information of latent variables between the input and reconstructed images, further facilitating the filtering of class-irrelevant information. Extensive experimental results on the MSTAR dataset demonstrate that SGCIF-VAE achieves superior adversarial robustness across various operating conditions. The proposed method achieves over 90% accuracy against weak perturbations and above 60% against stronger ones. The code will be released at https://github.com/jialinlvcn/SGCIF-VAE upon acceptance.
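The defense described above follows the reconstruction-based paradigm: a variational autoencoder purifies the input before it reaches the classifier, and its training objective rewards keeping class-relevant content while a bottleneck term discards the rest. The sketch below (PyTorch, not the authors' released SGCIF-VAE code; the module names, loss weights, and loss composition are assumptions for illustration) shows this general recipe with a reconstruction term, a KL bottleneck term, and a class-relevance term computed on the reconstructed image.

# Minimal sketch of a VAE-style, reconstruction-based defense (assumed illustration,
# not the paper's implementation). encoder/decoder/classifier are user-supplied modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionDefense(nn.Module):
    def __init__(self, encoder, decoder, classifier, beta=1.0, gamma=1.0):
        super().__init__()
        self.encoder = encoder        # maps an image to (mu, logvar) of the latent code
        self.decoder = decoder        # maps a latent sample z back to an image
        self.classifier = classifier  # downstream (frozen) SAR ATR classifier
        self.beta = beta              # weight of the KL bottleneck term (assumed value)
        self.gamma = gamma            # weight of the class-relevance term (assumed value)

    def forward(self, x):
        mu, logvar = self.encoder(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterization trick
        return self.decoder(z), mu, logvar

    def loss(self, x, labels):
        x_rec, mu, logvar = self(x)
        rec = F.mse_loss(x_rec, x)                                      # keep image content
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # compress the latent code
        cls = F.cross_entropy(self.classifier(x_rec), labels)           # keep class-relevant content
        return rec + self.beta * kl + self.gamma * cls

At inference time, a potentially adversarial input would be passed through forward() and only the reconstruction forwarded to the classifier, so that perturbation energy carried by class-irrelevant components is filtered out by the bottleneck.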
Journal Introduction
Signal Processing incorporates all aspects of the theory and practice of signal processing. It features original research work, tutorial and review articles, and accounts of practical developments. It is intended for the rapid dissemination of knowledge and experience to engineers and scientists working in the research, development, or practical application of signal processing.
Subject areas covered by the journal include: Signal Theory; Stochastic Processes; Detection and Estimation; Spectral Analysis; Filtering; Signal Processing Systems; Software Developments; Image Processing; Pattern Recognition; Optical Signal Processing; Digital Signal Processing; Multi-dimensional Signal Processing; Communication Signal Processing; Biomedical Signal Processing; Geophysical and Astrophysical Signal Processing; Earth Resources Signal Processing; Acoustic and Vibration Signal Processing; Data Processing; Remote Sensing; Signal Processing Technology; Radar Signal Processing; Sonar Signal Processing; Industrial Applications; New Applications.