Visual-Semantic Cooperative Learning for Few-Shot SAR Target Classification

Authors: Siyuan Wang; Yinghua Wang; Xiaoting Zhang; Chen Zhang; Hongwei Liu
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 18, pp. 6532-6550
Publication date: 2025-01-16
DOI: 10.1109/JSTARS.2025.3530442
URL: https://ieeexplore.ieee.org/document/10843851/
Citations: 0
Abstract
Meta-learning is currently the mainstream approach to few-shot synthetic aperture radar (SAR) target classification: it learns transferable empirical knowledge from a source domain so that novel classes can be recognized quickly from only a few samples. However, a source domain with sufficient labeled SAR images is difficult to obtain, which limits the empirical knowledge that can be transferred from the source to the target domain. Moreover, most existing methods rely only on visual images to learn target feature representations, resulting in poor feature discriminability in few-shot settings. To tackle these problems, we propose a novel visual-semantic cooperative network (VSC-Net) that performs dual visual and semantic classification, using semantic classification to compensate for the inaccuracy of visual classification. First, we design textual semantic descriptions of SAR targets to exploit rich semantic information. These descriptions are then encoded by the text encoder of a pretrained large vision-language model to obtain class semantic embeddings of the targets. In the visual classification stage, we develop a semantic-based visual prototype calibration module that projects the class semantic embeddings into the visual space to calibrate the visual prototypes, improving the reliability of prototypes computed from only a few support samples. In addition, a semantic consistency loss is proposed to constrain the accuracy of the class semantic embeddings projected into the visual space. In the semantic classification stage, the visual features of query samples are mapped into the semantic space, and their classes are predicted by searching for the nearest class semantic embeddings. Furthermore, we introduce a visual indication loss that refines the semantic classification using the calibrated visual prototypes. Finally, the class of each query sample is decided by merging the visual and semantic classification results. Extensive experiments on the SAR target dataset validate the few-shot classification efficacy of VSC-Net.
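The calibration-and-fusion pipeline described in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the projection matrices `W` and `V`, the mixing weights `alpha` and `beta`, and plain cosine-similarity scoring are all assumptions standing in for the learned modules (the semantic-based visual prototype calibration, the visual-to-semantic mapping, and the merging of the two classification results).

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def calibrated_prototypes(support_feats, support_labels, sem_emb, W, alpha=0.5):
    """Calibrate few-shot visual prototypes with projected class semantic embeddings.

    support_feats: (n_support, d_v) visual features of support samples
    support_labels: (n_support,) class indices
    sem_emb: (n_classes, d_s) class semantic embeddings (e.g., from a VLM text encoder)
    W: (d_s, d_v) hypothetical semantic-to-visual projection (learned in the paper)
    alpha: hypothetical mixing weight between visual and semantic prototypes
    """
    n_classes = sem_emb.shape[0]
    # visual prototypes: class-wise mean of the few support features
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    # project class semantic embeddings into the visual space
    sem_vis = sem_emb @ W
    # convex combination calibrates the unreliable few-shot prototypes
    return alpha * protos + (1 - alpha) * sem_vis

def dual_classify(query_feats, protos, sem_emb, V, beta=0.5):
    """Fuse visual (prototype) and semantic (nearest-embedding) classification.

    V: (d_v, d_s) hypothetical visual-to-semantic mapping
    beta: hypothetical fusion weight between the two score sets
    """
    # visual branch: cosine similarity of queries to calibrated prototypes
    vis_scores = l2_normalize(query_feats) @ l2_normalize(protos).T
    # semantic branch: map queries into the semantic space, then compare
    # against the class semantic embeddings
    sem_scores = l2_normalize(query_feats @ V) @ l2_normalize(sem_emb).T
    # merge the visual and semantic classification results
    return np.argmax(beta * vis_scores + (1 - beta) * sem_scores, axis=1)
```

With `alpha` close to 1 the classifier behaves like a plain prototypical network; smaller `alpha` pulls the prototypes toward the semantic embeddings, which is most helpful when only one or two support samples per class are available.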
Journal Description:
The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" areas encompass the societal-benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these are areas not traditionally addressed in the IEEE context, including biodiversity, health, and climate. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests, serving present members in new ways and expanding IEEE visibility into new areas.