Small-Sample SAR Target Recognition Using a Multimodal Views Contrastive Learning Method

Yilin Li;Chengyu Wan;Xiaoyan Zhou;Tao Tang
DOI: 10.1109/LGRS.2025.3557534
Journal: IEEE Geoscience and Remote Sensing Letters, vol. 22, pp. 1-5
Published: 2025-04-03
URL: https://ieeexplore.ieee.org/document/10948483/
Citations: 0

Abstract

Self-supervised contrastive learning methods offer a promising approach to the small-sample synthetic aperture radar (SAR) automatic target recognition (ATR) problem by autonomously acquiring valuable visual representations from unlabeled data. However, current self-supervised contrastive learning methods primarily generate supervisory signals through augmented views of the original images, thereby underusing the rich information inherent in SAR images. To overcome this limitation, we integrate SAR targets’ geometric and physical properties, as captured in SAR target segmentation semantic maps and attribute scattering center reconstruction maps, into the contrastive learning stage. Moreover, we propose a novel multimodal-view contrastive learning method that comprises two stages. In the contrastive learning stage, we leverage a large amount of unlabeled data for both intramodal and cross-modal contrastive learning, thereby transferring discriminative information from these two views to the original image features to learn the feature representation. In the supervised training stage, a linear classifier is trained on a small number of labeled samples to partition the feature representation space and transfer to the downstream recognition task. The experimental results demonstrate that the proposed method achieves superior recognition performance in SAR small-sample ATR tasks and exhibits robust generalization capabilities, thereby providing additional discriminative information that augments target representation.
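The cross-modal contrastive stage described in the abstract pairs each original SAR image with its auxiliary views (segmentation semantic map, scattering center reconstruction) and pulls matching embeddings together while pushing non-matching ones apart. A minimal NumPy sketch of a symmetric InfoNCE-style loss illustrates this objective; note this is a standard formulation for such cross-modal contrastive training, not the paper's exact loss — the encoder, temperature value, and function names here are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE contrastive loss between two view embeddings.

    z_a, z_b: (N, D) embeddings of the same N samples under two views,
    e.g. the original SAR chip and its segmentation semantic map.
    Row i of z_a and row i of z_b form the positive pair; all other
    cross pairs serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (N, N) similarity matrix

    def xent(l):
        # Cross-entropy with the positive pair on the diagonal.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))

    # Average both directions: image -> view and view -> image.
    return 0.5 * (xent(logits) + xent(logits.T))
```

When the two views are correctly paired, the loss is low; shuffling the pairing raises it, which is the gradient signal that transfers discriminative information from the auxiliary views into the image features. The second stage then fits a linear classifier on the frozen features using the few labeled samples.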