MMFF: Multiview and multi-level feature fusion method within limited sample conditions for SAR image target recognition

IF 10.6 · CAS Zone 1 (Earth Sciences) · Q1 GEOGRAPHY, PHYSICAL
Benyuan Lv, Ying Luo, Jiacheng Ni, Siyuan Zhao, Jia Liang, Yingxi Liu, Qun Zhang
{"title":"MMFF: Multiview and multi-level feature fusion method within limited sample conditions for SAR image target recognition","authors":"Benyuan Lv ,&nbsp;Ying Luo ,&nbsp;Jiacheng Ni ,&nbsp;Siyuan Zhao ,&nbsp;Jia Liang ,&nbsp;Yingxi Liu ,&nbsp;Qun Zhang","doi":"10.1016/j.isprsjprs.2025.03.010","DOIUrl":null,"url":null,"abstract":"<div><div>The fusion of SAR image features from multiple views can effectively improve the recognition performance of SAR ATR tasks. However, when the number of raw samples in SAR images is limited, multiple fusions of SAR image features from different views of the same class may result in significant feature redundancy, causing overfitting of the model. To solve those problems, we propose a multiview and multi-level feature fusion (MMFF) method that can extract richer features from extremely limited raw data. Firstly, we design a new multiview feature fusion (NMFF) module to reduce feature redundancy generated by fusing features from the same class but from different views. This module uses multiple feature fusion methods to fuse features from different views, effectively reducing feature redundancy and alleviating model overfitting. Then, we design a multiview multi-class random feature extraction (MMRFE) module to extract inter-class separability features and intra-class similarity features and fuse them with multiview features. The MMRFE module enables the network to learn inter-class separability between different classes and intra-class similarity between the same classes, thereby improving the network’s recognition ability in extremely limited data. Finally, to further increase inter-class separability and intra-class similarity, we design a coarse classifier to perform coarse classification on inter-class separability features and intra-class similarity features. The coarse classifier increases inter-class separability and intra-class similarity by calculating classification loss to affect updating network parameters. Experimental results demonstrate that when trained with 10 SAR images per class, our algorithm achieves recognition rates of 92.53 % and 80.50 % on the MSTAR dataset and Civilian Vehicle dataset, respectively, outperforming state-of-the-art methods by at least 3.2 % and 3.94 % in classification accuracy.</div></div>","PeriodicalId":50269,"journal":{"name":"ISPRS Journal of Photogrammetry and Remote Sensing","volume":"224 ","pages":"Pages 302-316"},"PeriodicalIF":10.6000,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISPRS Journal of Photogrammetry and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092427162500108X","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"GEOGRAPHY, PHYSICAL","Score":null,"Total":0}
Citations: 0

Abstract

The fusion of SAR image features from multiple views can effectively improve recognition performance in SAR automatic target recognition (ATR) tasks. However, when the number of raw SAR image samples is limited, repeatedly fusing features from different views of the same class can introduce significant feature redundancy and cause the model to overfit. To address these problems, we propose a multiview and multi-level feature fusion (MMFF) method that extracts richer features from extremely limited raw data. First, we design a new multiview feature fusion (NMFF) module to reduce the feature redundancy generated when features of the same class but different views are fused. This module applies multiple fusion methods to features from different views, effectively reducing redundancy and alleviating overfitting. Then, we design a multiview multi-class random feature extraction (MMRFE) module to extract inter-class separability features and intra-class similarity features and fuse them with the multiview features. The MMRFE module enables the network to learn the separability between different classes and the similarity within the same class, improving recognition ability under extremely limited data. Finally, to further increase inter-class separability and intra-class similarity, we design a coarse classifier that performs coarse classification on the inter-class separability and intra-class similarity features; its classification loss contributes to the network parameter updates, driving the features toward greater separability between classes and stronger similarity within classes. Experimental results demonstrate that, when trained with 10 SAR images per class, our algorithm achieves recognition rates of 92.53% and 80.50% on the MSTAR and Civilian Vehicle datasets, respectively, outperforming state-of-the-art methods by at least 3.2% and 3.94% in classification accuracy.
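As a rough illustration of the pipeline described in the abstract, the sketch below encodes each view with a shared CNN, fuses the per-view features with two simple operators, and attaches an auxiliary coarse-classification head whose loss is added to the main loss. This is not the authors' implementation: the mean/max fusion standing in for NMFF, the omission of the MMRFE random feature extraction step, and all layer sizes, loss weights, and input shapes are illustrative assumptions, since the abstract does not specify these details.

```python
# Minimal sketch of the idea in the abstract (not the authors' code).
# Assumptions: a shared CNN encoder per view, element-wise mean/max fusion in
# place of the paper's NMFF module, and an auxiliary "coarse" classification
# head whose cross-entropy loss is added to the main loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiviewFusionNet(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int = 128):
        super().__init__()
        # Shared encoder applied to every view of a target (single-channel SAR chips).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Main head sees the concatenation of the two fused feature vectors.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)
        # Auxiliary "coarse" head, used only to compute an extra classification loss.
        self.coarse_classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views: torch.Tensor):
        # views: (batch, num_views, 1, H, W)
        b, v = views.shape[:2]
        feats = self.encoder(views.flatten(0, 1)).view(b, v, -1)  # (b, v, feat_dim)
        mean_fused = feats.mean(dim=1)  # averaging suppresses view-redundant components
        max_fused = feats.amax(dim=1)   # max keeps the most salient per-view responses
        logits = self.classifier(torch.cat([mean_fused, max_fused], dim=1))
        coarse_logits = self.coarse_classifier(mean_fused)
        return logits, coarse_logits


def total_loss(logits, coarse_logits, labels, aux_weight: float = 0.5):
    # The coarse-classification loss nudges the shared features toward larger
    # inter-class separability, in the spirit of the coarse classifier above.
    return F.cross_entropy(logits, labels) + aux_weight * F.cross_entropy(coarse_logits, labels)
```

For example, with views of shape (batch, num_views, 1, 64, 64) and integer labels, the two heads can be trained jointly through total_loss; the auxiliary weight and the choice of fusion operators are free design choices, not values taken from the paper.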
Source journal
ISPRS Journal of Photogrammetry and Remote Sensing (Engineering & Technology - Imaging Science & Photographic Technology)
CiteScore: 21.00
Self-citation rate: 6.30%
Annual articles: 273
Review time: 40 days
Journal description: The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who are involved in various disciplines that utilize photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also acting as a comprehensive source of reference and archive. P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields. In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.