View Intervention and Feature Alignment Aggregation Framework for Multiview SAR Target Recognition

IF 5.3 | Region 2, Earth Science | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC
Qijun Dai;Gong Zhang;Biao Xue;Lifeng Liu;Lipo Wang
DOI: 10.1109/JSTARS.2025.3614695
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 25177-25191
Published: 2025-09-26 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/11181158/
Citations: 0

Abstract

Multiview synthetic aperture radar (SAR) automatic target recognition (ATR) has attracted increasing attention for its ability to integrate effective information from multiple images. However, existing algorithms have ignored the interplay between the multiview combination and the multiview network, failing to explore the inherent coupling relationship within multiview images. To tackle these issues, a multiview SAR ATR framework called view intervention and feature alignment aggregation is proposed. First, a deep clustering-based multiview combination is designed: images with sufficient complementary information are selected from the raw SAR data under each category to form multiview images according to image features, namely the latent features obtained by an autoencoder (AE). Next, an efficient multiview feature alignment aggregation (Mv-FAA) network is proposed, in which the encoder of the AE serves as the feature extraction module. Guided by a hybrid loss function, the Mv-FAA network extracts complementary features from multiview images while retaining certain consistent features, so that the final holistic features of the target are obtained for discrimination. The proposed framework strengthens the link between the multiview combination and the multiview network to reconcile the complementary and consistent information within multiview images, providing valuable insights for advancing multiview SAR ATR research. Experiments on the Moving and Stationary Target Recognition and the Full Aspect Stationary Targets-Vehicle datasets achieve state-of-the-art performance.
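The abstract gives no implementation details, but the deep clustering-based multiview combination step can be sketched roughly: cluster the AE latent features within one target category and keep one image per cluster, so the selected views carry complementary rather than redundant information. Everything below is a hypothetical illustration (the function names, the minimal k-means routine, and the random stand-in latents are all assumptions, not the authors' code):

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Minimal k-means on the rows of x; returns (labels, centroids).
    Stand-in for the paper's deep clustering step."""
    rng = np.random.default_rng(seed)
    cent = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - cent[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                cent[c] = x[labels == c].mean(axis=0)
    return labels, cent

def build_multiview_set(latents, n_views=3):
    """Pick one image index per cluster (the sample nearest its centroid),
    so the chosen views are mutually complementary."""
    labels, cent = kmeans(latents, n_views)
    picks = []
    for c in range(n_views):
        members = np.flatnonzero(labels == c)
        if members.size == 0:
            continue  # degenerate cluster; skip
        d = np.linalg.norm(latents[members] - cent[c], axis=1)
        picks.append(int(members[np.argmin(d)]))
    return picks

# Stand-in for AE encoder outputs: 40 images of one category, 16-D latents.
rng = np.random.default_rng(1)
latents = rng.normal(size=(40, 16))
print(build_multiview_set(latents, n_views=3))
```

In the actual framework the latents would come from the trained AE encoder, and the resulting multiview sets would feed the Mv-FAA network rather than being printed.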
Source journal
- CiteScore: 9.30
- Self-citation rate: 10.90%
- Articles published: 563
- Review time: 4.7 months
Journal description: The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" area encompasses the societal benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these are areas not traditionally addressed in the IEEE context, including biodiversity, health, and climate. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests, serving present members in new ways and expanding IEEE visibility into new areas.