Scattering-guided class-irrelevant filtering for adversarially robust SAR automatic target recognition

IF 3.6 · CAS Zone 2 (Engineering & Technology) · JCR Q2, ENGINEERING, ELECTRICAL & ELECTRONIC
Zhunga Liu, Jialin Lyu, Yimin Fu
{"title":"面向对抗鲁棒SAR自动目标识别的散射制导类无关滤波","authors":"Zhunga Liu,&nbsp;Jialin Lyu,&nbsp;Yimin Fu","doi":"10.1016/j.sigpro.2025.110273","DOIUrl":null,"url":null,"abstract":"<div><div>The vulnerability of deep neural networks (DNNs) to adversarial perturbations severely constrains their deployment in real-world applications. A common approach to defend against such perturbations is to perform input reconstruction based on image representations. However, the lack of visual intuitiveness in synthetic aperture radar (SAR) images severely complicates the reconstruction of critical target information, making the adversarial robustness of SAR automatic target recognition (ATR) systems difficult to guarantee. To address this problem, we propose a scattering-guided class-irrelevant filtering variational autoencoder (SGCIF-VAE) for adversarially robust SAR ATR. Specifically, the proposed method incorporates scattering and image-based representations to reconstruct target information from adversarial examples through feature representation and information filtering. First, strong scattering points of the target are exploited to guide the extraction of topological features, which exhibit stronger stability against adversarial perturbations than visual features. Then, a weighting reconstruction mechanism (WRM) is applied to the reconstructed image to supplement the spatial structural information. Consequently, the attention shifts induced by adversarial perturbations are effectively resisted. During training, class-relevant and class-irrelevant information are explicitly separated via a class-disentanglement variational loss (CDVL). Moreover, a bi-directional information bottleneck (BDIB) is employed to amplify the disparity in mutual information of latent variables between the input and reconstructed images, further facilitating the filtering of class-irrelevant information. 
Extensive experimental results on the MSTAR dataset demonstrate that SGCIF-VAE achieves superior adversarial robustness across various operating conditions. The proposed method achieves over 90% accuracy against weak perturbations and above 60% against stronger ones. The code will be released at <span><span>https://github.com/jialinlvcn/SGCIF-VAE</span><svg><path></path></svg></span> upon acceptance.</div></div>","PeriodicalId":49523,"journal":{"name":"Signal Processing","volume":"239 ","pages":"Article 110273"},"PeriodicalIF":3.6000,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Scattering-guided class-irrelevant filtering for adversarially robust SAR automatic target recognition\",\"authors\":\"Zhunga Liu,&nbsp;Jialin Lyu,&nbsp;Yimin Fu\",\"doi\":\"10.1016/j.sigpro.2025.110273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The vulnerability of deep neural networks (DNNs) to adversarial perturbations severely constrains their deployment in real-world applications. A common approach to defend against such perturbations is to perform input reconstruction based on image representations. However, the lack of visual intuitiveness in synthetic aperture radar (SAR) images severely complicates the reconstruction of critical target information, making the adversarial robustness of SAR automatic target recognition (ATR) systems difficult to guarantee. To address this problem, we propose a scattering-guided class-irrelevant filtering variational autoencoder (SGCIF-VAE) for adversarially robust SAR ATR. Specifically, the proposed method incorporates scattering and image-based representations to reconstruct target information from adversarial examples through feature representation and information filtering. 
First, strong scattering points of the target are exploited to guide the extraction of topological features, which exhibit stronger stability against adversarial perturbations than visual features. Then, a weighting reconstruction mechanism (WRM) is applied to the reconstructed image to supplement the spatial structural information. Consequently, the attention shifts induced by adversarial perturbations are effectively resisted. During training, class-relevant and class-irrelevant information are explicitly separated via a class-disentanglement variational loss (CDVL). Moreover, a bi-directional information bottleneck (BDIB) is employed to amplify the disparity in mutual information of latent variables between the input and reconstructed images, further facilitating the filtering of class-irrelevant information. Extensive experimental results on the MSTAR dataset demonstrate that SGCIF-VAE achieves superior adversarial robustness across various operating conditions. The proposed method achieves over 90% accuracy against weak perturbations and above 60% against stronger ones. 
The code will be released at <span><span>https://github.com/jialinlvcn/SGCIF-VAE</span><svg><path></path></svg></span> upon acceptance.</div></div>\",\"PeriodicalId\":49523,\"journal\":{\"name\":\"Signal Processing\",\"volume\":\"239 \",\"pages\":\"Article 110273\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2025-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0165168425003871\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0165168425003871","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Scattering-guided class-irrelevant filtering for adversarially robust SAR automatic target recognition
The vulnerability of deep neural networks (DNNs) to adversarial perturbations severely constrains their deployment in real-world applications. A common approach to defend against such perturbations is to perform input reconstruction based on image representations. However, the lack of visual intuitiveness in synthetic aperture radar (SAR) images severely complicates the reconstruction of critical target information, making the adversarial robustness of SAR automatic target recognition (ATR) systems difficult to guarantee. To address this problem, we propose a scattering-guided class-irrelevant filtering variational autoencoder (SGCIF-VAE) for adversarially robust SAR ATR. Specifically, the proposed method incorporates scattering and image-based representations to reconstruct target information from adversarial examples through feature representation and information filtering. First, strong scattering points of the target are exploited to guide the extraction of topological features, which exhibit stronger stability against adversarial perturbations than visual features. Then, a weighting reconstruction mechanism (WRM) is applied to the reconstructed image to supplement the spatial structural information. Consequently, the attention shifts induced by adversarial perturbations are effectively resisted. During training, class-relevant and class-irrelevant information are explicitly separated via a class-disentanglement variational loss (CDVL). Moreover, a bi-directional information bottleneck (BDIB) is employed to amplify the disparity in mutual information of latent variables between the input and reconstructed images, further facilitating the filtering of class-irrelevant information. Extensive experimental results on the MSTAR dataset demonstrate that SGCIF-VAE achieves superior adversarial robustness across various operating conditions. The proposed method achieves over 90% accuracy against weak perturbations and above 60% against stronger ones. 
The code will be released at https://github.com/jialinlvcn/SGCIF-VAE upon acceptance.
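As a toy illustration of the first step the abstract describes (using strong scattering points of the target to guide feature extraction), the snippet below picks the k strongest local maxima of a SAR magnitude image with plain NumPy. This is a didactic sketch of the general idea only, not the authors' released implementation; the function name and the 8-neighbour peak criterion are illustrative assumptions.

```python
import numpy as np

def strong_scattering_points(magnitude: np.ndarray, k: int = 5) -> list:
    """Return (row, col) positions of the k strongest local maxima.

    A pixel counts as a local maximum if it is >= all 8 neighbours.
    This stands in for the scattering-centre extraction that guides
    topological feature extraction; real pipelines use more robust
    detectors (e.g. CLEAN-style or attributed scattering models).
    """
    padded = np.pad(magnitude, 1, mode="constant", constant_values=-np.inf)
    h, w = magnitude.shape
    # Stack the 8 shifted views so each layer holds one neighbour per pixel.
    neighbours = np.stack([
        padded[r:r + h, c:c + w]
        for r in range(3) for c in range(3) if (r, c) != (1, 1)
    ])
    is_peak = magnitude >= neighbours.max(axis=0)
    rows, cols = np.nonzero(is_peak)
    # Keep the k peaks with the largest magnitude, strongest first.
    order = np.argsort(magnitude[rows, cols])[::-1][:k]
    return [(int(rows[i]), int(cols[i])) for i in order]

# Toy 5x5 "magnitude image" with two dominant scatterers.
img = np.zeros((5, 5))
img[1, 1] = 9.0
img[3, 4] = 7.0
print(strong_scattering_points(img, k=2))  # → [(1, 1), (3, 4)]
```

In the paper's pipeline these detected scattering centres would then drive the extraction of topological features, which the authors report to be more stable under adversarial perturbations than raw visual features.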
Source journal: Signal Processing (Engineering: Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 9.10%
Annual articles: 309
Review time: 41 days
Aims and scope: Signal Processing incorporates all aspects of the theory and practice of signal processing. It features original research work, tutorial and review articles, and accounts of practical developments. It is intended for a rapid dissemination of knowledge and experience to engineers and scientists working in the research, development or practical application of signal processing. Subject areas covered by the journal include: Signal Theory; Stochastic Processes; Detection and Estimation; Spectral Analysis; Filtering; Signal Processing Systems; Software Developments; Image Processing; Pattern Recognition; Optical Signal Processing; Digital Signal Processing; Multi-dimensional Signal Processing; Communication Signal Processing; Biomedical Signal Processing; Geophysical and Astrophysical Signal Processing; Earth Resources Signal Processing; Acoustic and Vibration Signal Processing; Data Processing; Remote Sensing; Signal Processing Technology; Radar Signal Processing; Sonar Signal Processing; Industrial Applications; New Applications.